00:00:00.001 Started by upstream project "autotest-per-patch" build number 132846
00:00:00.001 originally caused by:
00:00:00.001  Started by user sys_sgci
00:00:00.041 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.042 The recommended git tool is: git
00:00:00.042 using credential 00000000-0000-0000-0000-000000000002
00:00:00.043  > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.062 Fetching changes from the remote Git repository
00:00:00.064  > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.098 Using shallow fetch with depth 1
00:00:00.098 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.098  > git --version # timeout=10
00:00:00.150  > git --version # 'git version 2.39.2'
00:00:00.150 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.193 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.194  > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.037  > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.048  > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.059 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.059  > git config core.sparsecheckout # timeout=10
00:00:03.069  > git read-tree -mu HEAD # timeout=10
00:00:03.083  > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.099 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.099  > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.195 [Pipeline] Start of Pipeline
00:00:03.208 [Pipeline] library
00:00:03.210 Loading library shm_lib@master
00:00:03.210 Library shm_lib@master is cached. Copying from home.
00:00:03.227 [Pipeline] node
00:00:03.235 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest_2
00:00:03.236 [Pipeline] {
00:00:03.246 [Pipeline] catchError
00:00:03.248 [Pipeline] {
00:00:03.261 [Pipeline] wrap
00:00:03.271 [Pipeline] {
00:00:03.279 [Pipeline] stage
00:00:03.280 [Pipeline] { (Prologue)
00:00:03.297 [Pipeline] echo
00:00:03.299 Node: VM-host-WFP1
00:00:03.305 [Pipeline] cleanWs
00:00:03.315 [WS-CLEANUP] Deleting project workspace...
00:00:03.315 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.322 [WS-CLEANUP] done
00:00:03.535 [Pipeline] setCustomBuildProperty
00:00:03.603 [Pipeline] httpRequest
00:00:03.907 [Pipeline] echo
00:00:03.909 Sorcerer 10.211.164.20 is alive
00:00:03.917 [Pipeline] retry
00:00:03.919 [Pipeline] {
00:00:03.929 [Pipeline] httpRequest
00:00:03.933 HttpMethod: GET
00:00:03.933 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.934 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.935 Response Code: HTTP/1.1 200 OK
00:00:03.936 Success: Status code 200 is in the accepted range: 200,404
00:00:03.936 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.082 [Pipeline] }
00:00:04.095 [Pipeline] // retry
00:00:04.102 [Pipeline] sh
00:00:04.384 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.398 [Pipeline] httpRequest
00:00:04.783 [Pipeline] echo
00:00:04.784 Sorcerer 10.211.164.20 is alive
00:00:04.792 [Pipeline] retry
00:00:04.793 [Pipeline] {
00:00:04.806 [Pipeline] httpRequest
00:00:04.811 HttpMethod: GET
00:00:04.812 URL: http://10.211.164.20/packages/spdk_4dfeb7f956ca2ea417b1882cf0e8ac23c1da93fd.tar.gz
00:00:04.812 Sending request to url: http://10.211.164.20/packages/spdk_4dfeb7f956ca2ea417b1882cf0e8ac23c1da93fd.tar.gz
00:00:04.814 Response Code: HTTP/1.1 200 OK
00:00:04.814 Success: Status code 200 is in the accepted range: 200,404
00:00:04.815 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_4dfeb7f956ca2ea417b1882cf0e8ac23c1da93fd.tar.gz
00:00:20.704 [Pipeline] }
00:00:20.722 [Pipeline] // retry
00:00:20.730 [Pipeline] sh
00:00:21.017 + tar --no-same-owner -xf spdk_4dfeb7f956ca2ea417b1882cf0e8ac23c1da93fd.tar.gz
00:00:23.567 [Pipeline] sh
00:00:23.849 + git -C spdk log --oneline -n5
00:00:23.849 4dfeb7f95 mk/spdk.common.mk Use pattern substitution instead of prefix removal
00:00:23.849 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:00:23.849 66289a6db build: use VERSION file for storing version
00:00:23.849 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:00:23.849 cec5ba284 nvme/rdma: Register UMR per IO request
00:00:23.868 [Pipeline] writeFile
00:00:23.883 [Pipeline] sh
00:00:24.166 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:24.178 [Pipeline] sh
00:00:24.526 + cat autorun-spdk.conf
00:00:24.527 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:24.527 SPDK_TEST_NVME=1
00:00:24.527 SPDK_TEST_FTL=1
00:00:24.527 SPDK_TEST_ISAL=1
00:00:24.527 SPDK_RUN_ASAN=1
00:00:24.527 SPDK_RUN_UBSAN=1
00:00:24.527 SPDK_TEST_XNVME=1
00:00:24.527 SPDK_TEST_NVME_FDP=1
00:00:24.527 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:24.533 RUN_NIGHTLY=0
00:00:24.535 [Pipeline] }
00:00:24.548 [Pipeline] // stage
00:00:24.563 [Pipeline] stage
00:00:24.566 [Pipeline] { (Run VM)
00:00:24.579 [Pipeline] sh
00:00:24.861 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:24.861 + echo 'Start stage prepare_nvme.sh'
00:00:24.861 Start stage prepare_nvme.sh
00:00:24.861 + [[ -n 5 ]]
00:00:24.861 + disk_prefix=ex5
00:00:24.861 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:00:24.861 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:00:24.861 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:00:24.861 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:24.861 ++ SPDK_TEST_NVME=1
00:00:24.861 ++ SPDK_TEST_FTL=1
00:00:24.861 ++ SPDK_TEST_ISAL=1
00:00:24.861 ++ SPDK_RUN_ASAN=1
00:00:24.861 ++ SPDK_RUN_UBSAN=1
00:00:24.861 ++ SPDK_TEST_XNVME=1
00:00:24.861 ++ SPDK_TEST_NVME_FDP=1
00:00:24.861 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:24.861 ++ RUN_NIGHTLY=0
00:00:24.861 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:00:24.861 + nvme_files=()
00:00:24.861 + declare -A nvme_files
00:00:24.861 + backend_dir=/var/lib/libvirt/images/backends
00:00:24.861 + nvme_files['nvme.img']=5G
00:00:24.861 + nvme_files['nvme-cmb.img']=5G
00:00:24.861 + nvme_files['nvme-multi0.img']=4G
00:00:24.861 + nvme_files['nvme-multi1.img']=4G
00:00:24.861 + nvme_files['nvme-multi2.img']=4G
00:00:24.861 + nvme_files['nvme-openstack.img']=8G
00:00:24.861 + nvme_files['nvme-zns.img']=5G
00:00:24.861 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:24.861 + (( SPDK_TEST_FTL == 1 ))
00:00:24.861 + nvme_files["nvme-ftl.img"]=6G
00:00:24.861 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:24.861 + nvme_files["nvme-fdp.img"]=1G
00:00:24.861 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:24.861 + for nvme in "${!nvme_files[@]}"
00:00:24.861 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:00:24.861 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:24.861 + for nvme in "${!nvme_files[@]}"
00:00:24.861 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G
00:00:24.861 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:24.861 + for nvme in "${!nvme_files[@]}"
00:00:24.861 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:00:24.861 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:24.861 + for nvme in "${!nvme_files[@]}"
00:00:24.861 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:00:25.121 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:25.121 + for nvme in "${!nvme_files[@]}"
00:00:25.121 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:00:25.121 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:25.121 + for nvme in "${!nvme_files[@]}"
00:00:25.121 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:00:25.121 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:25.121 + for nvme in "${!nvme_files[@]}"
00:00:25.121 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:00:25.121 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:25.121 + for nvme in "${!nvme_files[@]}"
00:00:25.121 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G
00:00:25.121 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:25.121 + for nvme in "${!nvme_files[@]}"
00:00:25.121 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:00:25.380 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:25.380 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:00:25.380 + echo 'End stage prepare_nvme.sh'
00:00:25.380 End stage prepare_nvme.sh
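
Deflated from the xtrace above into a plain script, prepare_nvme.sh sizes each backing file in an associative array, appends the FTL/FDP images only when the matching SPDK_TEST_* flag from autorun-spdk.conf is set, and loops the array through create_nvme_img.sh. A minimal sketch of that shape (paths, sizes, and flags are taken from the trace; the exact surrounding script structure is an assumption):

    #!/usr/bin/env bash
    # Backing-file sizes as traced above; keys become ex5-<name> under backend_dir.
    declare -A nvme_files=(
      [nvme.img]=5G [nvme-cmb.img]=5G [nvme-multi0.img]=4G [nvme-multi1.img]=4G
      [nvme-multi2.img]=4G [nvme-openstack.img]=8G [nvme-zns.img]=5G
    )
    # FTL/FDP images only exist when the corresponding test is enabled.
    (( SPDK_TEST_FTL == 1 )) && nvme_files[nvme-ftl.img]=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files[nvme-fdp.img]=1G

    backend_dir=/var/lib/libvirt/images/backends
    disk_prefix=ex5
    for nvme in "${!nvme_files[@]}"; do
      # Each call formats a raw, falloc-preallocated image of the requested size.
      sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
        -n "$backend_dir/$disk_prefix-$nvme" -s "${nvme_files[$nvme]}"
    done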
00:00:25.391 [Pipeline] sh
00:00:25.672 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:25.672 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:00:25.672 
00:00:25.672 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:00:25.672 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:00:25.672 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:00:25.672 HELP=0
00:00:25.672 DRY_RUN=0
00:00:25.672 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,
00:00:25.672 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:25.672 NVME_AUTO_CREATE=0
00:00:25.672 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,,
00:00:25.672 NVME_CMB=,,,,
00:00:25.672 NVME_PMR=,,,,
00:00:25.672 NVME_ZNS=,,,,
00:00:25.672 NVME_MS=true,,,,
00:00:25.672 NVME_FDP=,,,on,
00:00:25.672 SPDK_VAGRANT_DISTRO=fedora39
00:00:25.672 SPDK_VAGRANT_VMCPU=10
00:00:25.672 SPDK_VAGRANT_VMRAM=12288
00:00:25.672 SPDK_VAGRANT_PROVIDER=libvirt
00:00:25.672 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:25.672 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:25.672 SPDK_OPENSTACK_NETWORK=0
00:00:25.672 VAGRANT_PACKAGE_BOX=0
00:00:25.672 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:00:25.672 FORCE_DISTRO=true
00:00:25.672 VAGRANT_BOX_VERSION=
00:00:25.672 EXTRA_VAGRANTFILES=
00:00:25.672 NIC_MODEL=e1000
00:00:25.672 
00:00:25.672 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:00:25.672 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:00:28.207 Bringing machine 'default' up with 'libvirt' provider...
00:00:29.587 ==> default: Creating image (snapshot of base box volume).
00:00:29.846 ==> default: Creating domain with the following settings...
00:00:29.846 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733924481_74e2e7703337f8fcc2dd
00:00:29.846 ==> default: -- Domain type: kvm
00:00:29.846 ==> default: -- Cpus: 10
00:00:29.846 ==> default: -- Feature: acpi
00:00:29.846 ==> default: -- Feature: apic
00:00:29.846 ==> default: -- Feature: pae
00:00:29.846 ==> default: -- Memory: 12288M
00:00:29.846 ==> default: -- Memory Backing: hugepages:
00:00:29.846 ==> default: -- Management MAC:
00:00:29.846 ==> default: -- Loader:
00:00:29.846 ==> default: -- Nvram:
00:00:29.846 ==> default: -- Base box: spdk/fedora39
00:00:29.846 ==> default: -- Storage pool: default
00:00:29.846 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733924481_74e2e7703337f8fcc2dd.img (20G)
00:00:29.846 ==> default: -- Volume Cache: default
00:00:29.846 ==> default: -- Kernel:
00:00:29.846 ==> default: -- Initrd:
00:00:29.846 ==> default: -- Graphics Type: vnc
00:00:29.846 ==> default: -- Graphics Port: -1
00:00:29.846 ==> default: -- Graphics IP: 127.0.0.1
00:00:29.846 ==> default: -- Graphics Password: Not defined
00:00:29.846 ==> default: -- Video Type: cirrus
00:00:29.846 ==> default: -- Video VRAM: 9216
00:00:29.846 ==> default: -- Sound Type:
00:00:29.846 ==> default: -- Keymap: en-us
00:00:29.846 ==> default: -- TPM Path:
00:00:29.846 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:29.846 ==> default: -- Command line args:
00:00:29.846 ==> default: -> value=-device,
00:00:29.846 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:29.846 ==> default: -> value=-drive,
00:00:29.846 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:00:29.846 ==> default: -> value=-device,
00:00:29.846 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:00:29.846 ==> default: -> value=-device,
00:00:29.846 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:29.846 ==> default: -> value=-drive,
00:00:29.846 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0,
00:00:29.846 ==> default: -> value=-device,
00:00:29.846 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:29.846 ==> default: -> value=-device,
00:00:29.846 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:00:29.846 ==> default: -> value=-drive,
00:00:29.846 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:00:29.846 ==> default: -> value=-device,
00:00:29.846 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:29.846 ==> default: -> value=-drive,
00:00:29.846 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:00:29.846 ==> default: -> value=-device,
00:00:29.846 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:29.846 ==> default: -> value=-drive,
00:00:29.846 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:00:29.846 ==> default: -> value=-device,
00:00:29.846 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:29.846 ==> default: -> value=-device,
00:00:29.846 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:00:29.846 ==> default: -> value=-device,
00:00:29.846 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:00:29.846 ==> default: -> value=-drive,
00:00:29.846 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:00:29.846 ==> default: -> value=-device,
00:00:29.846 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
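
Stripped of the vagrant-libvirt "-> value=" wrapping, those argument pairs are an ordinary QEMU command line. As a sketch, the fourth controller alone (the FDP-enabled one) would be launched like this, using the SPDK_QEMU_EMULATOR path from the Setup dump above; the other three controllers follow the same -device nvme / -drive / -device nvme-ns pattern:

    # Machine options and the first three controllers elided for brevity.
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

Here fdp.runs, fdp.nrg, and fdp.nruh set the reclaim-unit size, reclaim-group count, and reclaim-unit-handle count for NVMe Flexible Data Placement on the emulated subsystem.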
00:00:30.105 ==> default: Creating shared folders metadata...
00:00:30.105 ==> default: Starting domain.
00:00:32.012 ==> default: Waiting for domain to get an IP address...
00:00:53.944 ==> default: Waiting for SSH to become available...
00:00:53.944 ==> default: Configuring and enabling network interfaces...
00:00:57.316     default: SSH address: 192.168.121.96:22
00:00:57.316     default: SSH username: vagrant
00:00:57.316     default: SSH auth method: private key
00:00:59.851 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:09.833 ==> default: Mounting SSHFS shared folder...
00:01:11.210 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:11.210 ==> default: Checking Mount..
00:01:12.588 ==> default: Folder Successfully Mounted!
00:01:12.588 ==> default: Running provisioner: file...
00:01:13.969     default: ~/.gitconfig => .gitconfig
00:01:14.227 
00:01:14.227 SUCCESS!
00:01:14.227 
00:01:14.227 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:01:14.227 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:14.227 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:01:14.227 
00:01:14.235 [Pipeline] }
00:01:14.249 [Pipeline] // stage
00:01:14.257 [Pipeline] dir
00:01:14.258 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:01:14.259 [Pipeline] {
00:01:14.271 [Pipeline] catchError
00:01:14.273 [Pipeline] {
00:01:14.284 [Pipeline] sh
00:01:14.563 + vagrant ssh-config --host vagrant
00:01:14.563 + sed -ne /^Host/,$p
00:01:14.563 + tee ssh_conf
00:01:17.853 Host vagrant
00:01:17.853   HostName 192.168.121.96
00:01:17.853   User vagrant
00:01:17.853   Port 22
00:01:17.853   UserKnownHostsFile /dev/null
00:01:17.853   StrictHostKeyChecking no
00:01:17.853   PasswordAuthentication no
00:01:17.853   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:17.853   IdentitiesOnly yes
00:01:17.853   LogLevel FATAL
00:01:17.853   ForwardAgent yes
00:01:17.853   ForwardX11 yes
00:01:17.853 
00:01:17.866 [Pipeline] withEnv
00:01:17.868 [Pipeline] {
00:01:17.881 [Pipeline] sh
00:01:18.160 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:18.160 source /etc/os-release
00:01:18.160 [[ -e /image.version ]] && img=$(< /image.version)
00:01:18.160 # Minimal, systemd-like check.
00:01:18.160 if [[ -e /.dockerenv ]]; then
00:01:18.160   # Clear garbage from the node's name:
00:01:18.160   # agt-er_autotest_547-896 -> autotest_547-896
00:01:18.160   # $HOSTNAME is the actual container id
00:01:18.160   agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:18.160   if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:18.160     # We can assume this is a mount from a host where container is running,
00:01:18.160     # so fetch its hostname to easily identify the target swarm worker.
00:01:18.160     container="$(< /etc/hostname) ($agent)"
00:01:18.160   else
00:01:18.160     # Fallback
00:01:18.160     container=$agent
00:01:18.160   fi
00:01:18.160 fi
00:01:18.160 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:18.160 
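
One detail worth calling out in the provisioning script above: the agent name is derived with bash's ${var#pattern} expansion, which deletes the shortest prefix matching the pattern, exactly as the script's own comment describes. A quick illustration (the example value is taken from that comment):

    # '#*_' removes everything through the first underscore.
    name=agt-er_autotest_547-896
    echo "${name#*_}"   # prints: autotest_547-896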
00:01:18.493 [Pipeline] }
00:01:18.508 [Pipeline] // withEnv
00:01:18.516 [Pipeline] setCustomBuildProperty
00:01:18.530 [Pipeline] stage
00:01:18.532 [Pipeline] { (Tests)
00:01:18.548 [Pipeline] sh
00:01:18.827 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:19.097 [Pipeline] sh
00:01:19.376 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:19.647 [Pipeline] timeout
00:01:19.648 Timeout set to expire in 50 min
00:01:19.650 [Pipeline] {
00:01:19.663 [Pipeline] sh
00:01:19.942 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:20.510 HEAD is now at 4dfeb7f95 mk/spdk.common.mk Use pattern substitution instead of prefix removal
00:01:20.523 [Pipeline] sh
00:01:20.801 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:21.072 [Pipeline] sh
00:01:21.369 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:21.643 [Pipeline] sh
00:01:21.926 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:01:22.186 ++ readlink -f spdk_repo
00:01:22.186 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:22.186 + [[ -n /home/vagrant/spdk_repo ]]
00:01:22.186 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:22.186 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:22.186 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:22.186 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:22.186 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:22.186 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:22.186 + cd /home/vagrant/spdk_repo
00:01:22.186 + source /etc/os-release
00:01:22.186 ++ NAME='Fedora Linux'
00:01:22.186 ++ VERSION='39 (Cloud Edition)'
00:01:22.186 ++ ID=fedora
00:01:22.186 ++ VERSION_ID=39
00:01:22.186 ++ VERSION_CODENAME=
00:01:22.186 ++ PLATFORM_ID=platform:f39
00:01:22.186 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:22.186 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:22.186 ++ LOGO=fedora-logo-icon
00:01:22.186 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:22.186 ++ HOME_URL=https://fedoraproject.org/
00:01:22.186 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:22.186 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:22.186 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:22.186 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:22.186 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:22.186 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:22.186 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:22.186 ++ SUPPORT_END=2024-11-12
00:01:22.186 ++ VARIANT='Cloud Edition'
00:01:22.186 ++ VARIANT_ID=cloud
00:01:22.186 + uname -a
00:01:22.186 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:22.186 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:22.753 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:23.012 Hugepages
00:01:23.012 node     hugesize     free /  total
00:01:23.012 node0   1048576kB        0 /      0
00:01:23.012 node0      2048kB        0 /      0
00:01:23.012 
00:01:23.012 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:01:23.012 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:01:23.012 NVMe     0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:01:23.012 NVMe     0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1
00:01:23.012 NVMe     0000:00:12.0    1b36   0010   unknown nvme             nvme2      nvme2n1 nvme2n2 nvme2n3
00:01:23.013 NVMe     0000:00:13.0    1b36   0010   unknown nvme             nvme3      nvme3n1
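
That status table ties back to the QEMU topology requested earlier: the four 1b36:0010 controllers at 0000:00:10.0 through 0000:00:13.0 enumerate as nvme0 to nvme3, and the three multi* images show up as nvme2n1 to nvme2n3 on the third controller. If one wanted to double-check that mapping from inside the VM, a sysfs walk along these lines would do it (illustrative only, not part of the job):

    # Print each controller's PCI address, then its namespaces.
    for c in /sys/class/nvme/nvme[0-9]*; do
      printf '%s @ %s: ' "$(basename "$c")" "$(cat "$c/address")"
      ls -d "$c"/nvme*n[0-9]* 2>/dev/null | xargs -r -n1 basename | tr '\n' ' '
      echo
    done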
00:01:23.013 + rm -f /tmp/spdk-ld-path
00:01:23.013 + source autorun-spdk.conf
00:01:23.013 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:23.013 ++ SPDK_TEST_NVME=1
00:01:23.013 ++ SPDK_TEST_FTL=1
00:01:23.013 ++ SPDK_TEST_ISAL=1
00:01:23.013 ++ SPDK_RUN_ASAN=1
00:01:23.013 ++ SPDK_RUN_UBSAN=1
00:01:23.013 ++ SPDK_TEST_XNVME=1
00:01:23.013 ++ SPDK_TEST_NVME_FDP=1
00:01:23.013 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:23.013 ++ RUN_NIGHTLY=0
00:01:23.013 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:23.013 + [[ -n '' ]]
00:01:23.013 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:23.013 + for M in /var/spdk/build-*-manifest.txt
00:01:23.013 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:23.013 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:23.272 + for M in /var/spdk/build-*-manifest.txt
00:01:23.272 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:23.272 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:23.272 + for M in /var/spdk/build-*-manifest.txt
00:01:23.272 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:23.272 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:23.272 ++ uname
00:01:23.272 + [[ Linux == \L\i\n\u\x ]]
00:01:23.272 + sudo dmesg -T
00:01:23.272 + sudo dmesg --clear
00:01:23.272 + dmesg_pid=5254
00:01:23.272 + sudo dmesg -Tw
00:01:23.272 + [[ Fedora Linux == FreeBSD ]]
00:01:23.272 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:23.272 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:23.272 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:23.272 + [[ -x /usr/src/fio-static/fio ]]
00:01:23.272 + export FIO_BIN=/usr/src/fio-static/fio
00:01:23.272 + FIO_BIN=/usr/src/fio-static/fio
00:01:23.272 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:23.272 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:23.272 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:23.272 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:23.272 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:23.272 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:23.272 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:23.272 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:23.272 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:23.272 13:42:16 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:23.272 13:42:16 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:23.272 13:42:16 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:23.272 13:42:16 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:01:23.272 13:42:16 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:01:23.272 13:42:16 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:01:23.272 13:42:16 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:23.272 13:42:16 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:23.272 13:42:16 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:01:23.272 13:42:16 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:01:23.272 13:42:16 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:23.272 13:42:16 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:01:23.272 13:42:16 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:23.272 13:42:16 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:23.532 13:42:16 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:23.532 13:42:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:23.532 13:42:16 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:23.532 13:42:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:23.532 13:42:16 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:23.532 13:42:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:23.532 13:42:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.532 13:42:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.532 13:42:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.532 13:42:16 -- paths/export.sh@5 -- $ export PATH
00:01:23.532 13:42:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.532 13:42:16 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:23.532 13:42:16 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:23.532 13:42:16 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733924536.XXXXXX
00:01:23.532 13:42:16 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733924536.mirmp3
00:01:23.532 13:42:16 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:23.532 13:42:16 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:23.532 13:42:16 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:23.532 13:42:16 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:23.532 13:42:16 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:23.532 13:42:16 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:23.532 13:42:16 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:23.532 13:42:16 -- common/autotest_common.sh@10 -- $ set +x
00:01:23.532 13:42:16 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:23.532 13:42:16 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:23.532 13:42:16 -- pm/common@17 -- $ local monitor
00:01:23.532 13:42:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.532 13:42:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.532 13:42:16 -- pm/common@25 -- $ sleep 1
00:01:23.532 13:42:16 -- pm/common@21 -- $ date +%s
00:01:23.532 13:42:16 -- pm/common@21 -- $ date +%s
00:01:23.532 13:42:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733924536
00:01:23.532 13:42:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733924536
00:01:23.532 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733924536_collect-vmstat.pm.log
00:01:23.532 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733924536_collect-cpu-load.pm.log
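
start_monitor_resources, traced above, backgrounds one collector per monitored resource before the build starts; the "Redirecting to ...pm.log" lines are those collectors detaching. A rough sketch of the pattern (the collector paths and the -d/-l/-p flags are as traced; the loop body and backgrounding are assumptions about how pm/common wires it up):

    # Hypothetical reconstruction of the monitor startup traced above.
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power

    for monitor in "${MONITOR_RESOURCES[@]}"; do
      # -d: output directory, -l: log to file, -p: prefix naming this run
      "/home/vagrant/spdk_repo/spdk/scripts/perf/pm/$monitor" \
        -d "$power_dir" -l -p "monitor.autobuild.sh.$(date +%s)" &
    done
    sleep 1   # as in the trace: give the collectors a moment before the build starts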
00:01:24.470 13:42:17 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:24.470 13:42:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:24.470 13:42:17 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:24.470 13:42:17 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:24.470 13:42:17 -- spdk/autobuild.sh@16 -- $ date -u
00:01:24.470 Wed Dec 11 01:42:17 PM UTC 2024
00:01:24.470 13:42:17 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:24.470 v25.01-rc1-1-g4dfeb7f95
00:01:24.470 13:42:17 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:24.470 13:42:17 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:24.470 13:42:17 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:24.470 13:42:17 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:24.470 13:42:17 -- common/autotest_common.sh@10 -- $ set +x
00:01:24.470 ************************************
00:01:24.470 START TEST asan
00:01:24.470 ************************************
00:01:24.470 using asan
00:01:24.470 13:42:17 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:24.470 
00:01:24.470 real 0m0.000s
00:01:24.470 user 0m0.000s
00:01:24.470 sys 0m0.000s
00:01:24.470 13:42:17 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:24.470 13:42:17 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:24.470 ************************************
00:01:24.470 END TEST asan
00:01:24.470 ************************************
00:01:24.729 13:42:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:24.729 13:42:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:24.729 13:42:17 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:24.729 13:42:17 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:24.729 13:42:17 -- common/autotest_common.sh@10 -- $ set +x
00:01:24.729 ************************************
00:01:24.729 START TEST ubsan
00:01:24.729 ************************************
00:01:24.729 using ubsan
00:01:24.730 13:42:17 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:24.730 
00:01:24.730 real 0m0.000s
00:01:24.730 user 0m0.000s
00:01:24.730 sys 0m0.000s
00:01:24.730 13:42:17 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:24.730 ************************************
00:01:24.730 END TEST ubsan
00:01:24.730 ************************************
00:01:24.730 13:42:17 ubsan -- common/autotest_common.sh@10 -- $ set +x
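
Both sanitizer checks run through run_test from autotest_common.sh, which prints the START/END banners and the bash time summary seen above. The real helper does more bookkeeping (xtrace toggling, timing records); purely as an illustration of the observable shape:

    # Illustrative stand-in, not SPDK's actual run_test implementation.
    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
    }
    # Usage as in the log: run_test asan echo 'using asan'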
00:01:24.730 13:42:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:24.730 13:42:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:24.730 13:42:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:24.730 13:42:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:24.730 13:42:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:24.730 13:42:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:24.730 13:42:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:24.730 13:42:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:24.730 13:42:17 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:24.730 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:24.730 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:25.298 Using 'verbs' RDMA provider
00:01:41.561 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:59.649 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:59.649 Creating mk/config.mk...done.
00:01:59.649 Creating mk/cc.flags.mk...done.
00:01:59.649 Type 'make' to build.
00:01:59.649 13:42:50 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:59.649 13:42:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:59.649 13:42:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:59.649 13:42:50 -- common/autotest_common.sh@10 -- $ set +x
00:01:59.649 ************************************
00:01:59.649 START TEST make
00:01:59.649 ************************************
00:01:59.649 13:42:50 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:59.649 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:01:59.649   export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:01:59.649   meson setup builddir \
00:01:59.649     -Dwith-libaio=enabled \
00:01:59.649     -Dwith-liburing=enabled \
00:01:59.649     -Dwith-libvfn=disabled \
00:01:59.649     -Dwith-spdk=disabled \
00:01:59.649     -Dexamples=false \
00:01:59.649     -Dtests=false \
00:01:59.649     -Dtools=false && \
00:01:59.649   meson compile -C builddir && \
00:01:59.649   cd -)
00:01:59.928 The Meson build system
00:01:59.928 Version: 1.5.0
00:01:59.928 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:01:59.928 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:01:59.928 Build type: native build
00:01:59.928 Project name: xnvme
00:01:59.928 Project version: 0.7.5
00:01:59.928 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:59.928 C linker for the host machine: cc ld.bfd 2.40-14
00:01:59.928 Host machine cpu family: x86_64
00:01:59.928 Host machine cpu: x86_64
00:01:59.928 Message: host_machine.system: linux
00:01:59.928 Compiler for C supports arguments -Wno-missing-braces: YES
00:01:59.928 Compiler for C supports arguments -Wno-cast-function-type: YES
00:01:59.928 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:01:59.928 Run-time dependency threads found: YES
00:01:59.928 Has header "setupapi.h" : NO
00:01:59.928 Has header "linux/blkzoned.h" : YES
00:01:59.928 Has header "linux/blkzoned.h" : YES (cached)
00:01:59.928 Has header "libaio.h" : YES
00:01:59.928 Library aio found: YES
00:01:59.928 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:59.928 Run-time dependency liburing found: YES 2.2
00:01:59.928 Dependency libvfn skipped: feature with-libvfn disabled
00:01:59.928 Found CMake: /usr/bin/cmake (3.27.7)
00:01:59.928 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:01:59.928 Subproject spdk : skipped: feature with-spdk disabled
00:01:59.928 Run-time dependency appleframeworks found: NO (tried framework)
00:01:59.928 Run-time dependency appleframeworks found: NO (tried framework)
00:01:59.928 Library rt found: YES
00:01:59.928 Checking for function "clock_gettime" with dependency -lrt: YES
00:01:59.928 Configuring xnvme_config.h using configuration
00:01:59.928 Configuring xnvme.spec using configuration
00:01:59.928 Run-time dependency bash-completion found: YES 2.11
00:01:59.928 Message: Bash-completions: /usr/share/bash-completion/completions
00:01:59.928 Program cp found: YES (/usr/bin/cp)
00:01:59.928 Build targets in project: 3
00:01:59.928 
00:01:59.928 xnvme 0.7.5
00:01:59.928 
00:01:59.928 Subprojects
00:01:59.928   spdk : NO Feature 'with-spdk' disabled
00:01:59.928 
00:01:59.928 User defined options
00:01:59.928   examples     : false
00:01:59.928   tests        : false
00:01:59.928   tools        : false
00:01:59.928   with-libaio  : enabled
00:01:59.928   with-liburing: enabled
00:01:59.928   with-libvfn  : disabled
00:01:59.928   with-spdk    : disabled
00:01:59.928 
00:01:59.928 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:00.209 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:00.209 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:00.469 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:00.469 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:00.469 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:00.469 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:00.469 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:00.469 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:00.469 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:00.469 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:00.469 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:00.469 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:00.469 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:00.469 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:00.469 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:00.469 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:00.469 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:00.469 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:00.469 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:00.469 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:00.469 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:00.469 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:00.728 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:00.728 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:00.728 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:00.728 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:00.728 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:00.728 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:00.728 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:00.728 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:00.728 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:00.728 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:00.728 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:00.728 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:00.728 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:00.729 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:00.729 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:00.729 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:00.729 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:00.729 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:00.729 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:00.729 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:00.729 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:00.729 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:00.729 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:00.729 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:00.729 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:00.729 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:00.729 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:00.729 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:00.729 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:00.729 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:00.729 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:00.729 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:00.988 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:00.988 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:00.988 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:00.988 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:00.988 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:00.988 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:00.988 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:00.988 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:00.988 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:00.988 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:00.988 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:00.988 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:00.988 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:00.988 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:00.988 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:00.988 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:00.988 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:00.988 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:01.247 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:01.247 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:01.505 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:01.505 [75/76] Linking static target lib/libxnvme.a
00:02:01.505 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:01.505 INFO: autodetecting backend as ninja
00:02:01.505 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:08.073 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:08.073 The Meson build system
00:02:08.073 Version: 1.5.0
00:02:08.073 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:08.073 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:08.073 Build type: native build
00:02:08.073 Program cat found: YES (/usr/bin/cat)
00:02:08.073 Project name: DPDK
00:02:08.073 Project version: 24.03.0
00:02:08.073 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:08.073 C linker for the host machine: cc ld.bfd 2.40-14
00:02:08.073 Host machine cpu family: x86_64
00:02:08.073 Host machine cpu: x86_64
00:02:08.073 Message: ## Building in Developer Mode ##
00:02:08.073 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:08.073 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:08.073 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:08.073 Program python3 found: YES (/usr/bin/python3)
00:02:08.073 Program cat found: YES (/usr/bin/cat)
00:02:08.073 Compiler for C supports arguments -march=native: YES
00:02:08.073 Checking for size of "void *" : 8
00:02:08.073 Checking for size of "void *" : 8 (cached)
00:02:08.073 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:08.073 Library m found: YES
00:02:08.073 Library numa found: YES
00:02:08.073 Has header "numaif.h" : YES
00:02:08.073 Library fdt found: NO
00:02:08.073 Library execinfo found: NO
00:02:08.073 Has header "execinfo.h" : YES
00:02:08.073 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:08.073 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:08.073 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:08.073 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:08.073 Run-time dependency openssl found: YES 3.1.1
00:02:08.073 Run-time dependency libpcap found: YES 1.10.4
00:02:08.073 Has header "pcap.h" with dependency libpcap: YES
00:02:08.073 Compiler for C supports arguments -Wcast-qual: YES
00:02:08.073 Compiler for C supports arguments -Wdeprecated: YES
00:02:08.073 Compiler for C supports arguments -Wformat: YES
00:02:08.073 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:08.073 Compiler for C supports arguments -Wformat-security: NO
00:02:08.073 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:08.073 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:08.073 Compiler for C supports arguments -Wnested-externs: YES
00:02:08.073 Compiler for C supports arguments -Wold-style-definition: YES
00:02:08.073 Compiler for C supports arguments -Wpointer-arith: YES
00:02:08.073 Compiler for C supports arguments -Wsign-compare: YES
00:02:08.073 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:08.073 Compiler for C supports arguments -Wundef: YES
00:02:08.073 Compiler for C supports arguments -Wwrite-strings: YES
00:02:08.073 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:08.073 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:08.073 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:08.073 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:08.073 Program objdump found: YES (/usr/bin/objdump)
00:02:08.073 Compiler for C supports arguments -mavx512f: YES
00:02:08.073 Checking if "AVX512 checking" compiles: YES
00:02:08.073 Fetching value of define "__SSE4_2__" : 1
00:02:08.073 Fetching value of define "__AES__" : 1
00:02:08.073 Fetching value of define "__AVX__" : 1
00:02:08.073 Fetching value of define "__AVX2__" : 1
00:02:08.073 Fetching value of define "__AVX512BW__" : 1
00:02:08.073 Fetching value of define "__AVX512CD__" : 1
00:02:08.073 Fetching value of define "__AVX512DQ__" : 1
00:02:08.073 Fetching value of define "__AVX512F__" : 1
00:02:08.073 Fetching value of define "__AVX512VL__" : 1
00:02:08.073 Fetching value of define "__PCLMUL__" : 1
00:02:08.073 Fetching value of define "__RDRND__" : 1
00:02:08.073 Fetching value of define "__RDSEED__" : 1
00:02:08.073 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:08.073 Fetching value of define "__znver1__" : (undefined)
00:02:08.073 Fetching value of define "__znver2__" : (undefined)
00:02:08.073 Fetching value of define "__znver3__" : (undefined)
00:02:08.073 Fetching value of define "__znver4__" : (undefined)
00:02:08.073 Library asan found: YES
00:02:08.073 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:08.073 Message: lib/log: Defining dependency "log"
00:02:08.073 Message: lib/kvargs: Defining dependency "kvargs"
00:02:08.073 Message: lib/telemetry: Defining dependency "telemetry"
00:02:08.073 Library rt found: YES
00:02:08.073 Checking for function "getentropy" : NO
00:02:08.073 Message: lib/eal: Defining dependency "eal"
00:02:08.073 Message: lib/ring: Defining dependency "ring"
00:02:08.073 Message: lib/rcu: Defining dependency "rcu"
00:02:08.073 Message: lib/mempool: Defining dependency "mempool"
00:02:08.073 Message: lib/mbuf: Defining dependency "mbuf"
00:02:08.073 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:08.073 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:08.073 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:08.073 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:08.073 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:08.073 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:08.073 Compiler for C supports arguments -mpclmul: YES
00:02:08.073 Compiler for C supports arguments -maes: YES
00:02:08.073 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:08.073 Compiler for C supports arguments -mavx512bw: YES
00:02:08.073 Compiler for C supports arguments -mavx512dq: YES
00:02:08.073 Compiler for C supports arguments -mavx512vl: YES
00:02:08.073 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:08.073 Compiler for C supports arguments -mavx2: YES
00:02:08.073 Compiler for C supports arguments -mavx: YES
00:02:08.073 Message: lib/net: Defining dependency "net"
00:02:08.073 Message: lib/meter: Defining dependency "meter"
00:02:08.073 Message: lib/ethdev: Defining dependency "ethdev"
00:02:08.073 Message: lib/pci: Defining dependency "pci"
00:02:08.073 Message: lib/cmdline: Defining dependency "cmdline"
00:02:08.073 Message: lib/hash: Defining dependency "hash"
00:02:08.073 Message: lib/timer: Defining dependency "timer"
00:02:08.073 Message: lib/compressdev: Defining dependency "compressdev"
00:02:08.073 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:08.073 Message: lib/dmadev: Defining dependency "dmadev"
00:02:08.073 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:08.073 Message: lib/power: Defining dependency "power"
00:02:08.073 Message: lib/reorder: Defining dependency "reorder"
00:02:08.073 Message: lib/security: Defining dependency "security"
00:02:08.073 Has header "linux/userfaultfd.h" : YES
00:02:08.073 Has header "linux/vduse.h" : YES
00:02:08.073 Message: lib/vhost: Defining dependency "vhost"
00:02:08.073 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:08.073 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:08.073 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:08.073 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:08.073 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:08.073 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:08.073 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:08.073 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:08.073 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:08.073 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:08.073 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:08.073 Configuring doxy-api-html.conf using configuration
00:02:08.073 Configuring doxy-api-man.conf using configuration
00:02:08.073 Program mandb found: YES (/usr/bin/mandb)
00:02:08.073 Program sphinx-build found: NO
00:02:08.073 Configuring rte_build_config.h using configuration
00:02:08.073 Message: 
00:02:08.073 =================
00:02:08.073 Applications Enabled
00:02:08.073 =================
00:02:08.073 
00:02:08.073 apps:
00:02:08.073 
00:02:08.073 
00:02:08.073 Message: 
00:02:08.073 =================
00:02:08.073 Libraries Enabled
00:02:08.073 =================
00:02:08.073 
00:02:08.073 libs:
00:02:08.073   log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:08.073   net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:08.073   cryptodev, dmadev, power, reorder, security, vhost,
00:02:08.073 
00:02:08.073 Message: 
00:02:08.073 ===============
00:02:08.073 Drivers Enabled
00:02:08.073 ===============
00:02:08.073 
00:02:08.073 common:
00:02:08.073 
00:02:08.073 bus:
00:02:08.073   pci, vdev,
00:02:08.073 mempool:
00:02:08.073   ring,
00:02:08.073 dma:
00:02:08.073 
00:02:08.073 net:
00:02:08.073 
00:02:08.073 crypto:
00:02:08.073 
00:02:08.073 compress:
00:02:08.073 
00:02:08.073 vdpa:
00:02:08.073 
00:02:08.073 
00:02:08.073 Message: 
00:02:08.073 =================
00:02:08.073 Content Skipped
00:02:08.073 =================
00:02:08.073 
00:02:08.073 apps:
00:02:08.073   dumpcap: explicitly disabled via build config
00:02:08.073   graph: explicitly disabled via build config
00:02:08.073   pdump: explicitly disabled via build config
00:02:08.073   proc-info: explicitly disabled via build config
00:02:08.073   test-acl: explicitly disabled via build config
00:02:08.073   test-bbdev: explicitly disabled via build config
00:02:08.073   test-cmdline: explicitly disabled via build config
00:02:08.073   test-compress-perf: explicitly disabled via build config
00:02:08.073   test-crypto-perf: explicitly disabled via build config
00:02:08.073   test-dma-perf: explicitly disabled via build config
00:02:08.074 test-eventdev: explicitly disabled via build config
00:02:08.074 test-fib: explicitly disabled via build config
00:02:08.074 test-flow-perf: explicitly disabled via build config
00:02:08.074 test-gpudev: explicitly disabled via build config
00:02:08.074 test-mldev: explicitly disabled via build config
00:02:08.074 test-pipeline: explicitly disabled via build config
00:02:08.074 test-pmd: explicitly disabled via build config
00:02:08.074 test-regex: explicitly disabled via build config
00:02:08.074 test-sad: explicitly disabled via build config
00:02:08.074 test-security-perf: explicitly disabled via build config
00:02:08.074
00:02:08.074 libs:
00:02:08.074 argparse: explicitly disabled via build config
00:02:08.074 metrics: explicitly disabled via build config
00:02:08.074 acl: explicitly disabled via build config
00:02:08.074 bbdev: explicitly disabled via build config
00:02:08.074 bitratestats: explicitly disabled via build config
00:02:08.074 bpf: explicitly disabled via build config
00:02:08.074 cfgfile: explicitly disabled via build config
00:02:08.074 distributor: explicitly disabled via build config
00:02:08.074 efd: explicitly disabled via build config
00:02:08.074 eventdev: explicitly disabled via build config
00:02:08.074 dispatcher: explicitly disabled via build config
00:02:08.074 gpudev: explicitly disabled via build config
00:02:08.074 gro: explicitly disabled via build config
00:02:08.074 gso: explicitly disabled via build config
00:02:08.074 ip_frag: explicitly disabled via build config
00:02:08.074 jobstats: explicitly disabled via build config
00:02:08.074 latencystats: explicitly disabled via build config
00:02:08.074 lpm: explicitly disabled via build config
00:02:08.074 member: explicitly disabled via build config
00:02:08.074 pcapng: explicitly disabled via build config
00:02:08.074 rawdev: explicitly disabled via build config
00:02:08.074 regexdev: explicitly disabled via build config
00:02:08.074 mldev: explicitly disabled via build config
00:02:08.074 rib: explicitly disabled via build config
00:02:08.074 sched: explicitly disabled via build config
00:02:08.074 stack: explicitly disabled via build config
00:02:08.074 ipsec: explicitly disabled via build config
00:02:08.074 pdcp: explicitly disabled via build config
00:02:08.074 fib: explicitly disabled via build config
00:02:08.074 port: explicitly disabled via build config
00:02:08.074 pdump: explicitly disabled via build config
00:02:08.074 table: explicitly disabled via build config
00:02:08.074 pipeline: explicitly disabled via build config
00:02:08.074 graph: explicitly disabled via build config
00:02:08.074 node: explicitly disabled via build config
00:02:08.074
00:02:08.074 drivers:
00:02:08.074 common/cpt: not in enabled drivers build config
00:02:08.074 common/dpaax: not in enabled drivers build config
00:02:08.074 common/iavf: not in enabled drivers build config
00:02:08.074 common/idpf: not in enabled drivers build config
00:02:08.074 common/ionic: not in enabled drivers build config
00:02:08.074 common/mvep: not in enabled drivers build config
00:02:08.074 common/octeontx: not in enabled drivers build config
00:02:08.074 bus/auxiliary: not in enabled drivers build config
00:02:08.074 bus/cdx: not in enabled drivers build config
00:02:08.074 bus/dpaa: not in enabled drivers build config
00:02:08.074 bus/fslmc: not in enabled drivers build config
00:02:08.074 bus/ifpga: not in enabled drivers build config
00:02:08.074 bus/platform: not in enabled drivers build config
00:02:08.074 bus/uacce: not in enabled drivers build config
00:02:08.074 bus/vmbus: not in enabled drivers build config
00:02:08.074 common/cnxk: not in enabled drivers build config
00:02:08.074 common/mlx5: not in enabled drivers build config
00:02:08.074 common/nfp: not in enabled drivers build config
00:02:08.074 common/nitrox: not in enabled drivers build config
00:02:08.074 common/qat: not in enabled drivers build config
00:02:08.074 common/sfc_efx: not in enabled drivers build config
00:02:08.074 mempool/bucket: not in enabled drivers build config
00:02:08.074 mempool/cnxk: not in enabled drivers build config
00:02:08.074 mempool/dpaa: not in enabled drivers build config
00:02:08.074 mempool/dpaa2: not in enabled drivers build config
00:02:08.074 mempool/octeontx: not in enabled drivers build config
00:02:08.074 mempool/stack: not in enabled drivers build config
00:02:08.074 dma/cnxk: not in enabled drivers build config
00:02:08.074 dma/dpaa: not in enabled drivers build config
00:02:08.074 dma/dpaa2: not in enabled drivers build config
00:02:08.074 dma/hisilicon: not in enabled drivers build config
00:02:08.074 dma/idxd: not in enabled drivers build config
00:02:08.074 dma/ioat: not in enabled drivers build config
00:02:08.074 dma/skeleton: not in enabled drivers build config
00:02:08.074 net/af_packet: not in enabled drivers build config
00:02:08.074 net/af_xdp: not in enabled drivers build config
00:02:08.074 net/ark: not in enabled drivers build config
00:02:08.074 net/atlantic: not in enabled drivers build config
00:02:08.074 net/avp: not in enabled drivers build config
00:02:08.074 net/axgbe: not in enabled drivers build config
00:02:08.074 net/bnx2x: not in enabled drivers build config
00:02:08.074 net/bnxt: not in enabled drivers build config
00:02:08.074 net/bonding: not in enabled drivers build config
00:02:08.074 net/cnxk: not in enabled drivers build config
00:02:08.074 net/cpfl: not in enabled drivers build config
00:02:08.074 net/cxgbe: not in enabled drivers build config
00:02:08.074 net/dpaa: not in enabled drivers build config
00:02:08.074 net/dpaa2: not in enabled drivers build config
00:02:08.074 net/e1000: not in enabled drivers build config
00:02:08.074 net/ena: not in enabled drivers build config
00:02:08.074 net/enetc: not in enabled drivers build config
00:02:08.074 net/enetfec: not in enabled drivers build config
00:02:08.074 net/enic: not in enabled drivers build config
00:02:08.074 net/failsafe: not in enabled drivers build config
00:02:08.074 net/fm10k: not in enabled drivers build config
00:02:08.074 net/gve: not in enabled drivers build config
00:02:08.074 net/hinic: not in enabled drivers build config
00:02:08.074 net/hns3: not in enabled drivers build config
00:02:08.074 net/i40e: not in enabled drivers build config
00:02:08.074 net/iavf: not in enabled drivers build config
00:02:08.074 net/ice: not in enabled drivers build config
00:02:08.074 net/idpf: not in enabled drivers build config
00:02:08.074 net/igc: not in enabled drivers build config
00:02:08.074 net/ionic: not in enabled drivers build config
00:02:08.074 net/ipn3ke: not in enabled drivers build config
00:02:08.074 net/ixgbe: not in enabled drivers build config
00:02:08.074 net/mana: not in enabled drivers build config
00:02:08.074 net/memif: not in enabled drivers build config
00:02:08.074 net/mlx4: not in enabled drivers build config
00:02:08.074 net/mlx5: not in enabled drivers build config
00:02:08.074 net/mvneta: not in enabled drivers build config
00:02:08.074 net/mvpp2: not in enabled drivers build config
00:02:08.074 net/netvsc: not in enabled drivers build config
00:02:08.074 net/nfb: not in enabled drivers build config
00:02:08.074 net/nfp: not in enabled drivers build config
00:02:08.074 net/ngbe: not in enabled drivers build config
00:02:08.074 net/null: not in enabled drivers build config
00:02:08.074 net/octeontx: not in enabled drivers build config
00:02:08.074 net/octeon_ep: not in enabled drivers build config
00:02:08.074 net/pcap: not in enabled drivers build config
00:02:08.074 net/pfe: not in enabled drivers build config
00:02:08.074 net/qede: not in enabled drivers build config
00:02:08.074 net/ring: not in enabled drivers build config
00:02:08.074 net/sfc: not in enabled drivers build config
00:02:08.074 net/softnic: not in enabled drivers build config
00:02:08.074 net/tap: not in enabled drivers build config
00:02:08.074 net/thunderx: not in enabled drivers build config
00:02:08.074 net/txgbe: not in enabled drivers build config
00:02:08.074 net/vdev_netvsc: not in enabled drivers build config
00:02:08.074 net/vhost: not in enabled drivers build config
00:02:08.074 net/virtio: not in enabled drivers build config
00:02:08.074 net/vmxnet3: not in enabled drivers build config
00:02:08.074 raw/*: missing internal dependency, "rawdev"
00:02:08.074 crypto/armv8: not in enabled drivers build config
00:02:08.074 crypto/bcmfs: not in enabled drivers build config
00:02:08.074 crypto/caam_jr: not in enabled drivers build config
00:02:08.074 crypto/ccp: not in enabled drivers build config
00:02:08.074 crypto/cnxk: not in enabled drivers build config
00:02:08.074 crypto/dpaa_sec: not in enabled drivers build config
00:02:08.074 crypto/dpaa2_sec: not in enabled drivers build config
00:02:08.074 crypto/ipsec_mb: not in enabled drivers build config
00:02:08.074 crypto/mlx5: not in enabled drivers build config
00:02:08.074 crypto/mvsam: not in enabled drivers build config
00:02:08.074 crypto/nitrox: not in enabled drivers build config
00:02:08.074 crypto/null: not in enabled drivers build config
00:02:08.074 crypto/octeontx: not in enabled drivers build config
00:02:08.074 crypto/openssl: not in enabled drivers build config
00:02:08.074 crypto/scheduler: not in enabled drivers build config
00:02:08.074 crypto/uadk: not in enabled drivers build config
00:02:08.074 crypto/virtio: not in enabled drivers build config
00:02:08.074 compress/isal: not in enabled drivers build config
00:02:08.074 compress/mlx5: not in enabled drivers build config
00:02:08.074 compress/nitrox: not in enabled drivers build config
00:02:08.074 compress/octeontx: not in enabled drivers build config
00:02:08.074 compress/zlib: not in enabled drivers build config
00:02:08.074 regex/*: missing internal dependency, "regexdev"
00:02:08.074 ml/*: missing internal dependency, "mldev"
00:02:08.075 vdpa/ifc: not in enabled drivers build config
00:02:08.075 vdpa/mlx5: not in enabled drivers build config
00:02:08.075 vdpa/nfp: not in enabled drivers build config
00:02:08.075 vdpa/sfc: not in enabled drivers build config
00:02:08.075 event/*: missing internal dependency, "eventdev"
00:02:08.075 baseband/*: missing internal dependency, "bbdev"
00:02:08.075 gpu/*: missing internal dependency, "gpudev"
00:02:08.075
00:02:08.075
00:02:08.667 Build targets in project: 85
00:02:08.667
00:02:08.667 DPDK 24.03.0
00:02:08.667
00:02:08.667 User defined options
00:02:08.667 buildtype : debug
00:02:08.667 default_library : shared
00:02:08.667 libdir : lib
00:02:08.667 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:08.667 b_sanitize : address
00:02:08.667 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:08.667 c_link_args :
00:02:08.667 cpu_instruction_set: native
00:02:08.667 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:08.667 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:08.667 enable_docs : false
00:02:08.667 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:08.667 enable_kmods : false
00:02:08.667 max_lcores : 128
00:02:08.667 tests : false
00:02:08.667
00:02:08.667 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:08.926 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:08.926 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:08.926 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:08.926 [3/268] Linking static target lib/librte_kvargs.a
00:02:08.926 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:09.185 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:09.185 [6/268] Linking static target lib/librte_log.a
00:02:09.444 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.444 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:09.444 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:09.444 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:09.444 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:09.444 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:09.444 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:09.444 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:09.444 [15/268] Linking static target lib/librte_telemetry.a
00:02:09.444 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:09.444 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:09.702 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:09.960 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.960 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:09.960 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:09.960 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:09.960 [23/268] Linking target lib/librte_log.so.24.1
00:02:10.219 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:10.219 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:10.219 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:10.219 [27/268] Compiling C object
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:10.219 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:10.219 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:10.219 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:10.219 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.219 [32/268] Linking target lib/librte_kvargs.so.24.1 00:02:10.219 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:10.478 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:10.478 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:10.478 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:10.478 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:10.736 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:10.736 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:10.736 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:10.736 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:10.736 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:10.736 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:10.736 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:10.736 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:10.994 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:10.994 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:10.994 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:10.994 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:10.994 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:11.250 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:11.250 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:11.250 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:11.508 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:11.508 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:11.508 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:11.508 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:11.508 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:11.508 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:11.765 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:11.765 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:11.765 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:11.765 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:11.765 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:11.765 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:12.023 [66/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:12.023 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:12.023 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:12.023 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:12.280 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:12.280 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:12.280 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:12.280 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:12.280 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:12.280 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:12.537 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:12.537 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:12.537 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:12.537 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:12.794 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:12.794 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.794 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:12.794 [83/268] Linking static target lib/librte_ring.a 00:02:12.794 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:12.794 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:13.052 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:13.052 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.052 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:13.052 [89/268] Linking static target lib/librte_eal.a 00:02:13.052 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:13.052 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:13.052 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:13.052 [93/268] Linking static target lib/librte_rcu.a 00:02:13.052 [94/268] Linking static target lib/librte_mempool.a 00:02:13.310 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:13.310 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.310 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:13.567 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:13.567 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:13.567 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:13.567 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:13.567 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:13.567 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:13.567 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.824 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:13.824 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:13.824 [107/268] Linking static target lib/librte_net.a 00:02:14.082 [108/268] Compiling C object 
lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:14.082 [109/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:14.082 [110/268] Linking static target lib/librte_mbuf.a 00:02:14.082 [111/268] Linking static target lib/librte_meter.a 00:02:14.082 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:14.082 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:14.082 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:14.340 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.340 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:14.340 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.340 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.598 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:14.856 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:14.856 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:14.856 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:15.120 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.120 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:15.120 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:15.120 [126/268] Linking static target lib/librte_pci.a 00:02:15.120 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:15.379 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:15.379 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:15.379 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:15.379 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:15.379 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.379 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:15.379 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:15.379 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:15.637 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:15.637 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:15.637 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:15.637 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:15.637 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:15.637 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:15.637 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:15.637 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:15.637 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:15.637 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:15.896 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:15.896 [147/268] Linking static target lib/librte_cmdline.a 
00:02:15.896 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:15.896 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:16.155 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:16.155 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:16.155 [152/268] Linking static target lib/librte_timer.a 00:02:16.155 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:16.414 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:16.414 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:16.414 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:16.414 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:16.673 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:16.673 [159/268] Linking static target lib/librte_ethdev.a 00:02:16.673 [160/268] Linking static target lib/librte_compressdev.a 00:02:16.673 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:16.673 [162/268] Linking static target lib/librte_hash.a 00:02:16.673 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:16.931 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.931 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:16.931 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:16.931 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:16.931 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:16.931 [169/268] Linking static target lib/librte_dmadev.a 00:02:17.190 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:17.190 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:17.449 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:17.449 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.449 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.449 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:17.707 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:17.707 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:17.707 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:17.707 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:17.965 [180/268] Linking static target lib/librte_cryptodev.a 00:02:17.965 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.965 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.965 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:17.965 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:17.965 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:17.965 [186/268] Linking static target lib/librte_power.a 00:02:18.223 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 
00:02:18.482 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:18.483 [189/268] Linking static target lib/librte_reorder.a 00:02:18.483 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:18.483 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:18.483 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:18.483 [193/268] Linking static target lib/librte_security.a 00:02:18.741 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:18.998 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.257 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.257 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:19.257 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.257 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:19.257 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:19.516 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:19.516 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:19.516 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:19.775 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:19.775 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:19.775 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:20.033 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:20.033 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:20.033 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:20.033 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:20.292 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.292 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:20.292 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:20.292 [214/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:20.292 [215/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:20.292 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.292 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.292 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.292 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.292 [220/268] Linking static target drivers/librte_bus_vdev.a 00:02:20.292 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:20.550 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:20.550 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.550 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.550 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:20.808 [226/268] 
Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.808 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.780 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:25.082 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.082 [230/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:25.082 [231/268] Linking target lib/librte_eal.so.24.1 00:02:25.082 [232/268] Linking static target lib/librte_vhost.a 00:02:25.341 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:25.341 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:25.341 [235/268] Linking target lib/librte_timer.so.24.1 00:02:25.341 [236/268] Linking target lib/librte_ring.so.24.1 00:02:25.341 [237/268] Linking target lib/librte_meter.so.24.1 00:02:25.341 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:25.341 [239/268] Linking target lib/librte_pci.so.24.1 00:02:25.341 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:25.341 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:25.341 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:25.341 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:25.341 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:25.341 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:25.341 [246/268] Linking target lib/librte_rcu.so.24.1 00:02:25.341 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:25.601 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:25.601 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:25.601 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.601 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:25.601 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:25.861 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:25.861 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:25.861 [255/268] Linking target lib/librte_net.so.24.1 00:02:25.861 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:25.861 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:25.861 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:25.861 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:26.119 [260/268] Linking target lib/librte_cmdline.so.24.1 00:02:26.119 [261/268] Linking target lib/librte_hash.so.24.1 00:02:26.119 [262/268] Linking target lib/librte_security.so.24.1 00:02:26.119 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:26.119 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:26.119 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:26.378 [266/268] Linking target lib/librte_power.so.24.1 00:02:27.315 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.315 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:27.315 INFO: 
autodetecting backend as ninja 00:02:27.315 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:45.424 CC lib/log/log.o 00:02:45.424 CC lib/log/log_deprecated.o 00:02:45.424 CC lib/ut/ut.o 00:02:45.424 CC lib/log/log_flags.o 00:02:45.424 CC lib/ut_mock/mock.o 00:02:45.424 LIB libspdk_ut_mock.a 00:02:45.424 LIB libspdk_log.a 00:02:45.424 LIB libspdk_ut.a 00:02:45.424 SO libspdk_ut_mock.so.6.0 00:02:45.424 SO libspdk_log.so.7.1 00:02:45.424 SO libspdk_ut.so.2.0 00:02:45.424 SYMLINK libspdk_ut_mock.so 00:02:45.424 SYMLINK libspdk_log.so 00:02:45.424 SYMLINK libspdk_ut.so 00:02:45.424 CC lib/dma/dma.o 00:02:45.424 CC lib/ioat/ioat.o 00:02:45.424 CXX lib/trace_parser/trace.o 00:02:45.424 CC lib/util/base64.o 00:02:45.424 CC lib/util/crc16.o 00:02:45.424 CC lib/util/bit_array.o 00:02:45.424 CC lib/util/cpuset.o 00:02:45.424 CC lib/util/crc32.o 00:02:45.424 CC lib/util/crc32c.o 00:02:45.424 CC lib/vfio_user/host/vfio_user_pci.o 00:02:45.424 CC lib/util/crc32_ieee.o 00:02:45.424 CC lib/util/crc64.o 00:02:45.424 CC lib/util/dif.o 00:02:45.424 CC lib/vfio_user/host/vfio_user.o 00:02:45.424 LIB libspdk_dma.a 00:02:45.424 SO libspdk_dma.so.5.0 00:02:45.424 CC lib/util/fd.o 00:02:45.424 CC lib/util/fd_group.o 00:02:45.424 SYMLINK libspdk_dma.so 00:02:45.424 CC lib/util/file.o 00:02:45.424 CC lib/util/hexlify.o 00:02:45.424 CC lib/util/iov.o 00:02:45.424 LIB libspdk_ioat.a 00:02:45.424 SO libspdk_ioat.so.7.0 00:02:45.424 CC lib/util/math.o 00:02:45.424 CC lib/util/net.o 00:02:45.424 SYMLINK libspdk_ioat.so 00:02:45.424 LIB libspdk_vfio_user.a 00:02:45.424 CC lib/util/pipe.o 00:02:45.424 CC lib/util/strerror_tls.o 00:02:45.424 CC lib/util/string.o 00:02:45.424 SO libspdk_vfio_user.so.5.0 00:02:45.424 CC lib/util/uuid.o 00:02:45.424 SYMLINK libspdk_vfio_user.so 00:02:45.424 CC lib/util/xor.o 00:02:45.424 CC lib/util/zipf.o 00:02:45.424 CC lib/util/md5.o 00:02:45.424 LIB libspdk_util.a 00:02:45.424 LIB libspdk_trace_parser.a 00:02:45.424 SO libspdk_util.so.10.1 00:02:45.424 SO libspdk_trace_parser.so.6.0 00:02:45.424 SYMLINK libspdk_util.so 00:02:45.424 SYMLINK libspdk_trace_parser.so 00:02:45.424 CC lib/vmd/led.o 00:02:45.424 CC lib/vmd/vmd.o 00:02:45.424 CC lib/json/json_parse.o 00:02:45.424 CC lib/json/json_write.o 00:02:45.424 CC lib/json/json_util.o 00:02:45.424 CC lib/idxd/idxd_user.o 00:02:45.424 CC lib/idxd/idxd.o 00:02:45.424 CC lib/env_dpdk/env.o 00:02:45.424 CC lib/rdma_utils/rdma_utils.o 00:02:45.424 CC lib/conf/conf.o 00:02:45.424 CC lib/idxd/idxd_kernel.o 00:02:45.424 CC lib/env_dpdk/memory.o 00:02:45.424 CC lib/env_dpdk/pci.o 00:02:45.424 CC lib/env_dpdk/init.o 00:02:45.424 LIB libspdk_conf.a 00:02:45.424 SO libspdk_conf.so.6.0 00:02:45.424 LIB libspdk_json.a 00:02:45.424 LIB libspdk_rdma_utils.a 00:02:45.424 CC lib/env_dpdk/threads.o 00:02:45.424 SO libspdk_rdma_utils.so.1.0 00:02:45.424 SO libspdk_json.so.6.0 00:02:45.424 SYMLINK libspdk_conf.so 00:02:45.424 CC lib/env_dpdk/pci_ioat.o 00:02:45.424 SYMLINK libspdk_rdma_utils.so 00:02:45.424 CC lib/env_dpdk/pci_virtio.o 00:02:45.424 SYMLINK libspdk_json.so 00:02:45.424 CC lib/env_dpdk/pci_vmd.o 00:02:45.424 CC lib/env_dpdk/pci_idxd.o 00:02:45.424 CC lib/env_dpdk/pci_event.o 00:02:45.424 CC lib/env_dpdk/sigbus_handler.o 00:02:45.424 CC lib/env_dpdk/pci_dpdk.o 00:02:45.424 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:45.424 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:45.424 LIB libspdk_idxd.a 00:02:45.684 SO libspdk_idxd.so.12.1 00:02:45.684 SYMLINK libspdk_idxd.so 00:02:45.684 LIB 
libspdk_vmd.a 00:02:45.684 CC lib/rdma_provider/common.o 00:02:45.684 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:45.684 SO libspdk_vmd.so.6.0 00:02:45.684 CC lib/jsonrpc/jsonrpc_server.o 00:02:45.684 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:45.684 CC lib/jsonrpc/jsonrpc_client.o 00:02:45.684 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:45.684 SYMLINK libspdk_vmd.so 00:02:45.943 LIB libspdk_rdma_provider.a 00:02:45.943 SO libspdk_rdma_provider.so.7.0 00:02:45.943 LIB libspdk_jsonrpc.a 00:02:45.943 SYMLINK libspdk_rdma_provider.so 00:02:45.943 SO libspdk_jsonrpc.so.6.0 00:02:46.205 SYMLINK libspdk_jsonrpc.so 00:02:46.464 LIB libspdk_env_dpdk.a 00:02:46.464 SO libspdk_env_dpdk.so.15.1 00:02:46.464 CC lib/rpc/rpc.o 00:02:46.723 SYMLINK libspdk_env_dpdk.so 00:02:46.723 LIB libspdk_rpc.a 00:02:46.723 SO libspdk_rpc.so.6.0 00:02:46.982 SYMLINK libspdk_rpc.so 00:02:47.241 CC lib/trace/trace_flags.o 00:02:47.241 CC lib/trace/trace.o 00:02:47.241 CC lib/trace/trace_rpc.o 00:02:47.241 CC lib/notify/notify.o 00:02:47.241 CC lib/notify/notify_rpc.o 00:02:47.241 CC lib/keyring/keyring.o 00:02:47.241 CC lib/keyring/keyring_rpc.o 00:02:47.499 LIB libspdk_notify.a 00:02:47.500 SO libspdk_notify.so.6.0 00:02:47.500 LIB libspdk_trace.a 00:02:47.500 LIB libspdk_keyring.a 00:02:47.500 SYMLINK libspdk_notify.so 00:02:47.500 SO libspdk_keyring.so.2.0 00:02:47.500 SO libspdk_trace.so.11.0 00:02:47.758 SYMLINK libspdk_keyring.so 00:02:47.758 SYMLINK libspdk_trace.so 00:02:48.017 CC lib/sock/sock.o 00:02:48.017 CC lib/thread/thread.o 00:02:48.017 CC lib/sock/sock_rpc.o 00:02:48.017 CC lib/thread/iobuf.o 00:02:48.586 LIB libspdk_sock.a 00:02:48.586 SO libspdk_sock.so.10.0 00:02:48.846 SYMLINK libspdk_sock.so 00:02:49.104 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:49.104 CC lib/nvme/nvme_ctrlr.o 00:02:49.104 CC lib/nvme/nvme_ns_cmd.o 00:02:49.104 CC lib/nvme/nvme_ns.o 00:02:49.104 CC lib/nvme/nvme_fabric.o 00:02:49.104 CC lib/nvme/nvme_pcie_common.o 00:02:49.104 CC lib/nvme/nvme_pcie.o 00:02:49.104 CC lib/nvme/nvme_qpair.o 00:02:49.104 CC lib/nvme/nvme.o 00:02:49.669 LIB libspdk_thread.a 00:02:49.669 SO libspdk_thread.so.11.0 00:02:49.928 CC lib/nvme/nvme_quirks.o 00:02:49.928 CC lib/nvme/nvme_transport.o 00:02:49.928 SYMLINK libspdk_thread.so 00:02:49.928 CC lib/nvme/nvme_discovery.o 00:02:49.928 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:49.928 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:50.187 CC lib/nvme/nvme_tcp.o 00:02:50.187 CC lib/accel/accel.o 00:02:50.187 CC lib/nvme/nvme_opal.o 00:02:50.445 CC lib/blob/blobstore.o 00:02:50.445 CC lib/blob/request.o 00:02:50.445 CC lib/blob/zeroes.o 00:02:50.702 CC lib/init/json_config.o 00:02:50.702 CC lib/virtio/virtio.o 00:02:50.702 CC lib/fsdev/fsdev.o 00:02:50.702 CC lib/fsdev/fsdev_io.o 00:02:50.702 CC lib/fsdev/fsdev_rpc.o 00:02:50.702 CC lib/init/subsystem.o 00:02:50.960 CC lib/virtio/virtio_vhost_user.o 00:02:50.960 CC lib/accel/accel_rpc.o 00:02:50.960 CC lib/accel/accel_sw.o 00:02:50.960 CC lib/init/subsystem_rpc.o 00:02:50.960 CC lib/init/rpc.o 00:02:50.960 CC lib/virtio/virtio_vfio_user.o 00:02:51.218 CC lib/nvme/nvme_io_msg.o 00:02:51.218 LIB libspdk_init.a 00:02:51.218 CC lib/blob/blob_bs_dev.o 00:02:51.218 SO libspdk_init.so.6.0 00:02:51.218 CC lib/virtio/virtio_pci.o 00:02:51.218 LIB libspdk_accel.a 00:02:51.218 LIB libspdk_fsdev.a 00:02:51.218 SO libspdk_accel.so.16.0 00:02:51.218 SO libspdk_fsdev.so.2.0 00:02:51.218 SYMLINK libspdk_init.so 00:02:51.218 CC lib/nvme/nvme_poll_group.o 00:02:51.519 SYMLINK libspdk_fsdev.so 00:02:51.519 CC lib/nvme/nvme_zns.o 00:02:51.519 
SYMLINK libspdk_accel.so 00:02:51.519 CC lib/nvme/nvme_stubs.o 00:02:51.519 CC lib/nvme/nvme_auth.o 00:02:51.519 CC lib/event/app.o 00:02:51.519 CC lib/event/reactor.o 00:02:51.519 LIB libspdk_virtio.a 00:02:51.519 CC lib/nvme/nvme_cuse.o 00:02:51.519 SO libspdk_virtio.so.7.0 00:02:51.780 SYMLINK libspdk_virtio.so 00:02:51.780 CC lib/event/log_rpc.o 00:02:51.780 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:51.780 CC lib/event/app_rpc.o 00:02:51.780 CC lib/nvme/nvme_rdma.o 00:02:51.780 CC lib/event/scheduler_static.o 00:02:52.040 CC lib/bdev/bdev_rpc.o 00:02:52.040 CC lib/bdev/bdev.o 00:02:52.040 CC lib/bdev/bdev_zone.o 00:02:52.040 CC lib/bdev/part.o 00:02:52.040 LIB libspdk_event.a 00:02:52.040 SO libspdk_event.so.14.0 00:02:52.299 SYMLINK libspdk_event.so 00:02:52.299 CC lib/bdev/scsi_nvme.o 00:02:52.557 LIB libspdk_fuse_dispatcher.a 00:02:52.557 SO libspdk_fuse_dispatcher.so.1.0 00:02:52.557 SYMLINK libspdk_fuse_dispatcher.so 00:02:53.493 LIB libspdk_nvme.a 00:02:53.493 SO libspdk_nvme.so.15.0 00:02:53.752 SYMLINK libspdk_nvme.so 00:02:54.011 LIB libspdk_blob.a 00:02:54.011 SO libspdk_blob.so.12.0 00:02:54.270 SYMLINK libspdk_blob.so 00:02:54.529 CC lib/blobfs/blobfs.o 00:02:54.529 CC lib/blobfs/tree.o 00:02:54.529 CC lib/lvol/lvol.o 00:02:55.096 LIB libspdk_bdev.a 00:02:55.096 SO libspdk_bdev.so.17.0 00:02:55.355 SYMLINK libspdk_bdev.so 00:02:55.355 LIB libspdk_blobfs.a 00:02:55.355 SO libspdk_blobfs.so.11.0 00:02:55.355 CC lib/nbd/nbd.o 00:02:55.355 CC lib/nbd/nbd_rpc.o 00:02:55.355 CC lib/ftl/ftl_core.o 00:02:55.355 CC lib/ftl/ftl_init.o 00:02:55.355 CC lib/ftl/ftl_layout.o 00:02:55.355 CC lib/nvmf/ctrlr.o 00:02:55.614 CC lib/scsi/dev.o 00:02:55.614 CC lib/ublk/ublk.o 00:02:55.614 SYMLINK libspdk_blobfs.so 00:02:55.614 CC lib/nvmf/ctrlr_discovery.o 00:02:55.614 LIB libspdk_lvol.a 00:02:55.614 SO libspdk_lvol.so.11.0 00:02:55.614 SYMLINK libspdk_lvol.so 00:02:55.614 CC lib/ublk/ublk_rpc.o 00:02:55.614 CC lib/ftl/ftl_debug.o 00:02:55.614 CC lib/nvmf/ctrlr_bdev.o 00:02:55.614 CC lib/scsi/lun.o 00:02:55.872 CC lib/scsi/port.o 00:02:55.872 CC lib/ftl/ftl_io.o 00:02:55.872 CC lib/ftl/ftl_sb.o 00:02:55.872 CC lib/ftl/ftl_l2p.o 00:02:55.872 LIB libspdk_nbd.a 00:02:55.872 SO libspdk_nbd.so.7.0 00:02:55.872 CC lib/nvmf/subsystem.o 00:02:55.872 SYMLINK libspdk_nbd.so 00:02:55.872 CC lib/nvmf/nvmf.o 00:02:56.130 CC lib/nvmf/nvmf_rpc.o 00:02:56.130 CC lib/scsi/scsi.o 00:02:56.130 CC lib/scsi/scsi_bdev.o 00:02:56.130 CC lib/scsi/scsi_pr.o 00:02:56.130 CC lib/ftl/ftl_l2p_flat.o 00:02:56.130 LIB libspdk_ublk.a 00:02:56.130 CC lib/ftl/ftl_nv_cache.o 00:02:56.131 SO libspdk_ublk.so.3.0 00:02:56.388 SYMLINK libspdk_ublk.so 00:02:56.388 CC lib/nvmf/transport.o 00:02:56.388 CC lib/scsi/scsi_rpc.o 00:02:56.388 CC lib/scsi/task.o 00:02:56.388 CC lib/ftl/ftl_band.o 00:02:56.388 CC lib/ftl/ftl_band_ops.o 00:02:56.646 CC lib/ftl/ftl_writer.o 00:02:56.646 LIB libspdk_scsi.a 00:02:56.646 SO libspdk_scsi.so.9.0 00:02:56.904 SYMLINK libspdk_scsi.so 00:02:56.904 CC lib/nvmf/tcp.o 00:02:56.904 CC lib/ftl/ftl_rq.o 00:02:56.904 CC lib/ftl/ftl_reloc.o 00:02:56.904 CC lib/ftl/ftl_l2p_cache.o 00:02:56.904 CC lib/nvmf/stubs.o 00:02:56.904 CC lib/ftl/ftl_p2l.o 00:02:57.162 CC lib/vhost/vhost.o 00:02:57.162 CC lib/iscsi/conn.o 00:02:57.162 CC lib/iscsi/init_grp.o 00:02:57.162 CC lib/iscsi/iscsi.o 00:02:57.162 CC lib/iscsi/param.o 00:02:57.423 CC lib/iscsi/portal_grp.o 00:02:57.423 CC lib/iscsi/tgt_node.o 00:02:57.423 CC lib/iscsi/iscsi_subsystem.o 00:02:57.423 CC lib/ftl/ftl_p2l_log.o 00:02:57.423 CC 
lib/vhost/vhost_rpc.o 00:02:57.680 CC lib/vhost/vhost_scsi.o 00:02:57.680 CC lib/ftl/mngt/ftl_mngt.o 00:02:57.680 CC lib/vhost/vhost_blk.o 00:02:57.938 CC lib/iscsi/iscsi_rpc.o 00:02:57.938 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:57.938 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:57.938 CC lib/iscsi/task.o 00:02:58.196 CC lib/vhost/rte_vhost_user.o 00:02:58.196 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:58.196 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:58.196 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:58.196 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:58.196 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:58.196 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:58.454 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:58.454 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:58.454 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:58.454 CC lib/nvmf/mdns_server.o 00:02:58.454 CC lib/nvmf/rdma.o 00:02:58.454 CC lib/nvmf/auth.o 00:02:58.454 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:58.712 CC lib/ftl/utils/ftl_conf.o 00:02:58.712 CC lib/ftl/utils/ftl_md.o 00:02:58.712 LIB libspdk_iscsi.a 00:02:58.712 CC lib/ftl/utils/ftl_mempool.o 00:02:58.712 CC lib/ftl/utils/ftl_bitmap.o 00:02:58.712 SO libspdk_iscsi.so.8.0 00:02:58.712 CC lib/ftl/utils/ftl_property.o 00:02:58.712 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:58.970 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:58.970 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:58.970 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:58.970 SYMLINK libspdk_iscsi.so 00:02:58.970 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:58.970 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:58.970 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:58.970 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:59.229 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:59.229 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:59.229 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:59.229 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:59.229 LIB libspdk_vhost.a 00:02:59.229 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:59.229 SO libspdk_vhost.so.8.0 00:02:59.229 CC lib/ftl/base/ftl_base_dev.o 00:02:59.229 CC lib/ftl/base/ftl_base_bdev.o 00:02:59.229 CC lib/ftl/ftl_trace.o 00:02:59.229 SYMLINK libspdk_vhost.so 00:02:59.487 LIB libspdk_ftl.a 00:02:59.745 SO libspdk_ftl.so.9.0 00:03:00.312 SYMLINK libspdk_ftl.so 00:03:00.879 LIB libspdk_nvmf.a 00:03:00.879 SO libspdk_nvmf.so.20.0 00:03:01.138 SYMLINK libspdk_nvmf.so 00:03:01.705 CC module/env_dpdk/env_dpdk_rpc.o 00:03:01.705 CC module/accel/ioat/accel_ioat.o 00:03:01.705 CC module/blob/bdev/blob_bdev.o 00:03:01.705 CC module/accel/dsa/accel_dsa.o 00:03:01.705 CC module/accel/error/accel_error.o 00:03:01.705 CC module/fsdev/aio/fsdev_aio.o 00:03:01.705 CC module/sock/posix/posix.o 00:03:01.705 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:01.705 CC module/keyring/file/keyring.o 00:03:01.705 CC module/accel/iaa/accel_iaa.o 00:03:01.705 LIB libspdk_env_dpdk_rpc.a 00:03:01.705 SO libspdk_env_dpdk_rpc.so.6.0 00:03:01.963 SYMLINK libspdk_env_dpdk_rpc.so 00:03:01.963 CC module/accel/ioat/accel_ioat_rpc.o 00:03:01.963 CC module/keyring/file/keyring_rpc.o 00:03:01.963 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:01.963 LIB libspdk_scheduler_dynamic.a 00:03:01.963 CC module/accel/error/accel_error_rpc.o 00:03:01.963 CC module/accel/iaa/accel_iaa_rpc.o 00:03:01.963 SO libspdk_scheduler_dynamic.so.4.0 00:03:01.963 LIB libspdk_accel_ioat.a 00:03:01.963 LIB libspdk_blob_bdev.a 00:03:01.963 SO libspdk_accel_ioat.so.6.0 00:03:01.963 SO libspdk_blob_bdev.so.12.0 00:03:01.963 SYMLINK libspdk_scheduler_dynamic.so 00:03:01.963 CC module/accel/dsa/accel_dsa_rpc.o 00:03:01.963 LIB 
libspdk_keyring_file.a 00:03:01.963 CC module/fsdev/aio/linux_aio_mgr.o 00:03:01.963 SO libspdk_keyring_file.so.2.0 00:03:01.963 LIB libspdk_accel_error.a 00:03:01.963 SYMLINK libspdk_accel_ioat.so 00:03:01.963 SYMLINK libspdk_blob_bdev.so 00:03:01.963 LIB libspdk_accel_iaa.a 00:03:02.222 SO libspdk_accel_error.so.2.0 00:03:02.222 SO libspdk_accel_iaa.so.3.0 00:03:02.222 SYMLINK libspdk_keyring_file.so 00:03:02.222 LIB libspdk_accel_dsa.a 00:03:02.222 SYMLINK libspdk_accel_error.so 00:03:02.222 SO libspdk_accel_dsa.so.5.0 00:03:02.222 SYMLINK libspdk_accel_iaa.so 00:03:02.222 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:02.222 SYMLINK libspdk_accel_dsa.so 00:03:02.222 CC module/keyring/linux/keyring.o 00:03:02.479 CC module/bdev/delay/vbdev_delay.o 00:03:02.479 CC module/bdev/error/vbdev_error.o 00:03:02.479 LIB libspdk_scheduler_dpdk_governor.a 00:03:02.479 CC module/blobfs/bdev/blobfs_bdev.o 00:03:02.479 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:02.479 CC module/scheduler/gscheduler/gscheduler.o 00:03:02.479 LIB libspdk_fsdev_aio.a 00:03:02.479 CC module/bdev/gpt/gpt.o 00:03:02.479 CC module/bdev/lvol/vbdev_lvol.o 00:03:02.479 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:02.479 SO libspdk_fsdev_aio.so.1.0 00:03:02.479 CC module/bdev/gpt/vbdev_gpt.o 00:03:02.479 CC module/keyring/linux/keyring_rpc.o 00:03:02.479 LIB libspdk_sock_posix.a 00:03:02.479 SO libspdk_sock_posix.so.6.0 00:03:02.479 SYMLINK libspdk_fsdev_aio.so 00:03:02.479 LIB libspdk_scheduler_gscheduler.a 00:03:02.479 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:02.479 SO libspdk_scheduler_gscheduler.so.4.0 00:03:02.737 LIB libspdk_keyring_linux.a 00:03:02.737 SYMLINK libspdk_sock_posix.so 00:03:02.737 CC module/bdev/error/vbdev_error_rpc.o 00:03:02.737 SYMLINK libspdk_scheduler_gscheduler.so 00:03:02.737 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:02.737 SO libspdk_keyring_linux.so.1.0 00:03:02.737 SYMLINK libspdk_keyring_linux.so 00:03:02.737 LIB libspdk_blobfs_bdev.a 00:03:02.737 LIB libspdk_bdev_gpt.a 00:03:02.737 CC module/bdev/malloc/bdev_malloc.o 00:03:02.737 SO libspdk_blobfs_bdev.so.6.0 00:03:02.737 SO libspdk_bdev_gpt.so.6.0 00:03:02.737 LIB libspdk_bdev_error.a 00:03:02.737 LIB libspdk_bdev_delay.a 00:03:02.737 CC module/bdev/null/bdev_null.o 00:03:02.737 CC module/bdev/nvme/bdev_nvme.o 00:03:02.737 SO libspdk_bdev_delay.so.6.0 00:03:02.737 SO libspdk_bdev_error.so.6.0 00:03:02.737 SYMLINK libspdk_blobfs_bdev.so 00:03:02.737 SYMLINK libspdk_bdev_gpt.so 00:03:02.737 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:02.737 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:02.995 CC module/bdev/passthru/vbdev_passthru.o 00:03:02.995 CC module/bdev/raid/bdev_raid.o 00:03:02.995 SYMLINK libspdk_bdev_delay.so 00:03:02.995 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:02.995 SYMLINK libspdk_bdev_error.so 00:03:02.995 CC module/bdev/nvme/nvme_rpc.o 00:03:02.995 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:02.995 CC module/bdev/nvme/bdev_mdns_client.o 00:03:02.995 CC module/bdev/null/bdev_null_rpc.o 00:03:02.995 CC module/bdev/nvme/vbdev_opal.o 00:03:03.252 CC module/bdev/raid/bdev_raid_rpc.o 00:03:03.252 LIB libspdk_bdev_malloc.a 00:03:03.252 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:03.252 SO libspdk_bdev_malloc.so.6.0 00:03:03.252 LIB libspdk_bdev_passthru.a 00:03:03.252 LIB libspdk_bdev_null.a 00:03:03.252 SYMLINK libspdk_bdev_malloc.so 00:03:03.252 SO libspdk_bdev_passthru.so.6.0 00:03:03.252 SO libspdk_bdev_null.so.6.0 00:03:03.252 SYMLINK libspdk_bdev_passthru.so 00:03:03.252 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:03.252 SYMLINK libspdk_bdev_null.so 00:03:03.252 CC module/bdev/raid/bdev_raid_sb.o 00:03:03.252 LIB libspdk_bdev_lvol.a 00:03:03.252 CC module/bdev/raid/raid0.o 00:03:03.252 CC module/bdev/raid/raid1.o 00:03:03.510 CC module/bdev/raid/concat.o 00:03:03.510 SO libspdk_bdev_lvol.so.6.0 00:03:03.510 CC module/bdev/split/vbdev_split.o 00:03:03.510 SYMLINK libspdk_bdev_lvol.so 00:03:03.510 CC module/bdev/split/vbdev_split_rpc.o 00:03:03.771 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:03.771 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:03.771 CC module/bdev/xnvme/bdev_xnvme.o 00:03:03.772 CC module/bdev/aio/bdev_aio.o 00:03:03.772 LIB libspdk_bdev_split.a 00:03:03.772 CC module/bdev/ftl/bdev_ftl.o 00:03:03.772 CC module/bdev/iscsi/bdev_iscsi.o 00:03:03.772 SO libspdk_bdev_split.so.6.0 00:03:03.772 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:03.772 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:03.772 SYMLINK libspdk_bdev_split.so 00:03:03.772 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:04.030 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:04.030 LIB libspdk_bdev_zone_block.a 00:03:04.030 LIB libspdk_bdev_raid.a 00:03:04.030 SO libspdk_bdev_zone_block.so.6.0 00:03:04.030 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:04.030 CC module/bdev/aio/bdev_aio_rpc.o 00:03:04.030 SO libspdk_bdev_raid.so.6.0 00:03:04.030 LIB libspdk_bdev_xnvme.a 00:03:04.030 LIB libspdk_bdev_ftl.a 00:03:04.030 SYMLINK libspdk_bdev_zone_block.so 00:03:04.030 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:04.030 SO libspdk_bdev_xnvme.so.3.0 00:03:04.030 SO libspdk_bdev_ftl.so.6.0 00:03:04.288 SYMLINK libspdk_bdev_raid.so 00:03:04.288 LIB libspdk_bdev_iscsi.a 00:03:04.288 SYMLINK libspdk_bdev_xnvme.so 00:03:04.288 SYMLINK libspdk_bdev_ftl.so 00:03:04.288 LIB libspdk_bdev_aio.a 00:03:04.288 SO libspdk_bdev_iscsi.so.6.0 00:03:04.288 SO libspdk_bdev_aio.so.6.0 00:03:04.288 SYMLINK libspdk_bdev_iscsi.so 00:03:04.288 SYMLINK libspdk_bdev_aio.so 00:03:04.546 LIB libspdk_bdev_virtio.a 00:03:04.546 SO libspdk_bdev_virtio.so.6.0 00:03:04.546 SYMLINK libspdk_bdev_virtio.so 00:03:05.922 LIB libspdk_bdev_nvme.a 00:03:05.922 SO libspdk_bdev_nvme.so.7.1 00:03:05.922 SYMLINK libspdk_bdev_nvme.so 00:03:06.489 CC module/event/subsystems/vmd/vmd.o 00:03:06.489 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:06.489 CC module/event/subsystems/keyring/keyring.o 00:03:06.489 CC module/event/subsystems/sock/sock.o 00:03:06.489 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:06.489 CC module/event/subsystems/fsdev/fsdev.o 00:03:06.489 CC module/event/subsystems/iobuf/iobuf.o 00:03:06.489 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:06.489 CC module/event/subsystems/scheduler/scheduler.o 00:03:06.746 LIB libspdk_event_vhost_blk.a 00:03:06.746 LIB libspdk_event_vmd.a 00:03:06.746 LIB libspdk_event_sock.a 00:03:06.746 LIB libspdk_event_keyring.a 00:03:06.746 LIB libspdk_event_fsdev.a 00:03:06.746 LIB libspdk_event_scheduler.a 00:03:06.746 LIB libspdk_event_iobuf.a 00:03:06.746 SO libspdk_event_vhost_blk.so.3.0 00:03:06.746 SO libspdk_event_sock.so.5.0 00:03:06.746 SO libspdk_event_vmd.so.6.0 00:03:06.746 SO libspdk_event_keyring.so.1.0 00:03:06.746 SO libspdk_event_fsdev.so.1.0 00:03:06.746 SO libspdk_event_scheduler.so.4.0 00:03:06.746 SO libspdk_event_iobuf.so.3.0 00:03:06.746 SYMLINK libspdk_event_vhost_blk.so 00:03:06.746 SYMLINK libspdk_event_sock.so 00:03:06.746 SYMLINK libspdk_event_keyring.so 00:03:06.746 SYMLINK libspdk_event_fsdev.so 00:03:06.746 SYMLINK libspdk_event_vmd.so 
00:03:06.746 SYMLINK libspdk_event_scheduler.so 00:03:06.746 SYMLINK libspdk_event_iobuf.so 00:03:07.312 CC module/event/subsystems/accel/accel.o 00:03:07.312 LIB libspdk_event_accel.a 00:03:07.312 SO libspdk_event_accel.so.6.0 00:03:07.570 SYMLINK libspdk_event_accel.so 00:03:07.828 CC module/event/subsystems/bdev/bdev.o 00:03:08.086 LIB libspdk_event_bdev.a 00:03:08.086 SO libspdk_event_bdev.so.6.0 00:03:08.086 SYMLINK libspdk_event_bdev.so 00:03:08.652 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:08.652 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:08.652 CC module/event/subsystems/nbd/nbd.o 00:03:08.652 CC module/event/subsystems/ublk/ublk.o 00:03:08.652 CC module/event/subsystems/scsi/scsi.o 00:03:08.652 LIB libspdk_event_nbd.a 00:03:08.652 LIB libspdk_event_ublk.a 00:03:08.652 LIB libspdk_event_scsi.a 00:03:08.652 SO libspdk_event_scsi.so.6.0 00:03:08.652 SO libspdk_event_nbd.so.6.0 00:03:08.652 SO libspdk_event_ublk.so.3.0 00:03:08.652 LIB libspdk_event_nvmf.a 00:03:08.652 SYMLINK libspdk_event_nbd.so 00:03:08.652 SYMLINK libspdk_event_ublk.so 00:03:08.652 SYMLINK libspdk_event_scsi.so 00:03:08.652 SO libspdk_event_nvmf.so.6.0 00:03:08.910 SYMLINK libspdk_event_nvmf.so 00:03:09.169 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:09.169 CC module/event/subsystems/iscsi/iscsi.o 00:03:09.427 LIB libspdk_event_vhost_scsi.a 00:03:09.427 LIB libspdk_event_iscsi.a 00:03:09.427 SO libspdk_event_vhost_scsi.so.3.0 00:03:09.427 SO libspdk_event_iscsi.so.6.0 00:03:09.427 SYMLINK libspdk_event_vhost_scsi.so 00:03:09.427 SYMLINK libspdk_event_iscsi.so 00:03:09.685 SO libspdk.so.6.0 00:03:09.685 SYMLINK libspdk.so 00:03:09.943 CXX app/trace/trace.o 00:03:09.943 CC app/trace_record/trace_record.o 00:03:09.943 CC app/spdk_lspci/spdk_lspci.o 00:03:09.943 CC app/iscsi_tgt/iscsi_tgt.o 00:03:09.943 CC app/nvmf_tgt/nvmf_main.o 00:03:10.202 CC app/spdk_tgt/spdk_tgt.o 00:03:10.202 CC test/thread/poller_perf/poller_perf.o 00:03:10.202 CC examples/util/zipf/zipf.o 00:03:10.202 CC examples/ioat/perf/perf.o 00:03:10.202 CC test/dma/test_dma/test_dma.o 00:03:10.202 LINK spdk_lspci 00:03:10.202 LINK nvmf_tgt 00:03:10.202 LINK iscsi_tgt 00:03:10.202 LINK poller_perf 00:03:10.202 LINK spdk_trace_record 00:03:10.202 LINK zipf 00:03:10.202 LINK spdk_tgt 00:03:10.460 LINK ioat_perf 00:03:10.460 LINK spdk_trace 00:03:10.460 CC app/spdk_nvme_perf/perf.o 00:03:10.460 TEST_HEADER include/spdk/accel.h 00:03:10.460 TEST_HEADER include/spdk/accel_module.h 00:03:10.460 TEST_HEADER include/spdk/assert.h 00:03:10.460 TEST_HEADER include/spdk/barrier.h 00:03:10.460 TEST_HEADER include/spdk/base64.h 00:03:10.460 TEST_HEADER include/spdk/bdev.h 00:03:10.460 TEST_HEADER include/spdk/bdev_module.h 00:03:10.460 TEST_HEADER include/spdk/bdev_zone.h 00:03:10.460 TEST_HEADER include/spdk/bit_array.h 00:03:10.460 TEST_HEADER include/spdk/bit_pool.h 00:03:10.461 CC test/rpc_client/rpc_client_test.o 00:03:10.461 TEST_HEADER include/spdk/blob_bdev.h 00:03:10.461 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:10.461 TEST_HEADER include/spdk/blobfs.h 00:03:10.461 TEST_HEADER include/spdk/blob.h 00:03:10.461 TEST_HEADER include/spdk/conf.h 00:03:10.461 TEST_HEADER include/spdk/config.h 00:03:10.461 TEST_HEADER include/spdk/cpuset.h 00:03:10.461 TEST_HEADER include/spdk/crc16.h 00:03:10.461 TEST_HEADER include/spdk/crc32.h 00:03:10.461 TEST_HEADER include/spdk/crc64.h 00:03:10.461 TEST_HEADER include/spdk/dif.h 00:03:10.461 TEST_HEADER include/spdk/dma.h 00:03:10.461 TEST_HEADER include/spdk/endian.h 00:03:10.461 TEST_HEADER 
include/spdk/env_dpdk.h 00:03:10.461 TEST_HEADER include/spdk/env.h 00:03:10.461 TEST_HEADER include/spdk/event.h 00:03:10.461 TEST_HEADER include/spdk/fd_group.h 00:03:10.461 TEST_HEADER include/spdk/fd.h 00:03:10.461 TEST_HEADER include/spdk/file.h 00:03:10.720 TEST_HEADER include/spdk/fsdev.h 00:03:10.720 TEST_HEADER include/spdk/fsdev_module.h 00:03:10.720 TEST_HEADER include/spdk/ftl.h 00:03:10.720 TEST_HEADER include/spdk/gpt_spec.h 00:03:10.720 TEST_HEADER include/spdk/hexlify.h 00:03:10.720 CC examples/ioat/verify/verify.o 00:03:10.720 TEST_HEADER include/spdk/histogram_data.h 00:03:10.720 TEST_HEADER include/spdk/idxd.h 00:03:10.720 TEST_HEADER include/spdk/idxd_spec.h 00:03:10.720 TEST_HEADER include/spdk/init.h 00:03:10.720 CC app/spdk_nvme_identify/identify.o 00:03:10.720 TEST_HEADER include/spdk/ioat.h 00:03:10.720 TEST_HEADER include/spdk/ioat_spec.h 00:03:10.720 TEST_HEADER include/spdk/iscsi_spec.h 00:03:10.720 TEST_HEADER include/spdk/json.h 00:03:10.720 TEST_HEADER include/spdk/jsonrpc.h 00:03:10.720 TEST_HEADER include/spdk/keyring.h 00:03:10.720 TEST_HEADER include/spdk/keyring_module.h 00:03:10.720 TEST_HEADER include/spdk/likely.h 00:03:10.720 TEST_HEADER include/spdk/log.h 00:03:10.720 TEST_HEADER include/spdk/lvol.h 00:03:10.720 TEST_HEADER include/spdk/md5.h 00:03:10.720 TEST_HEADER include/spdk/memory.h 00:03:10.720 TEST_HEADER include/spdk/mmio.h 00:03:10.720 TEST_HEADER include/spdk/nbd.h 00:03:10.720 TEST_HEADER include/spdk/net.h 00:03:10.720 CC test/app/bdev_svc/bdev_svc.o 00:03:10.720 TEST_HEADER include/spdk/notify.h 00:03:10.720 TEST_HEADER include/spdk/nvme.h 00:03:10.720 CC test/event/event_perf/event_perf.o 00:03:10.720 TEST_HEADER include/spdk/nvme_intel.h 00:03:10.720 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:10.720 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:10.720 TEST_HEADER include/spdk/nvme_spec.h 00:03:10.720 TEST_HEADER include/spdk/nvme_zns.h 00:03:10.720 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:10.720 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:10.720 TEST_HEADER include/spdk/nvmf.h 00:03:10.720 TEST_HEADER include/spdk/nvmf_spec.h 00:03:10.720 TEST_HEADER include/spdk/nvmf_transport.h 00:03:10.720 TEST_HEADER include/spdk/opal.h 00:03:10.720 TEST_HEADER include/spdk/opal_spec.h 00:03:10.720 LINK test_dma 00:03:10.720 TEST_HEADER include/spdk/pci_ids.h 00:03:10.720 TEST_HEADER include/spdk/pipe.h 00:03:10.720 TEST_HEADER include/spdk/queue.h 00:03:10.720 TEST_HEADER include/spdk/reduce.h 00:03:10.720 TEST_HEADER include/spdk/rpc.h 00:03:10.720 TEST_HEADER include/spdk/scheduler.h 00:03:10.720 TEST_HEADER include/spdk/scsi.h 00:03:10.720 TEST_HEADER include/spdk/scsi_spec.h 00:03:10.720 TEST_HEADER include/spdk/sock.h 00:03:10.720 TEST_HEADER include/spdk/stdinc.h 00:03:10.720 TEST_HEADER include/spdk/string.h 00:03:10.720 TEST_HEADER include/spdk/thread.h 00:03:10.720 TEST_HEADER include/spdk/trace.h 00:03:10.720 TEST_HEADER include/spdk/trace_parser.h 00:03:10.720 TEST_HEADER include/spdk/tree.h 00:03:10.720 TEST_HEADER include/spdk/ublk.h 00:03:10.720 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:10.720 TEST_HEADER include/spdk/util.h 00:03:10.720 TEST_HEADER include/spdk/uuid.h 00:03:10.720 TEST_HEADER include/spdk/version.h 00:03:10.720 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:10.720 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:10.720 TEST_HEADER include/spdk/vhost.h 00:03:10.720 TEST_HEADER include/spdk/vmd.h 00:03:10.720 TEST_HEADER include/spdk/xor.h 00:03:10.720 TEST_HEADER include/spdk/zipf.h 00:03:10.720 CXX 
test/cpp_headers/accel.o 00:03:10.720 CC test/env/mem_callbacks/mem_callbacks.o 00:03:10.720 LINK rpc_client_test 00:03:10.720 LINK event_perf 00:03:10.720 LINK bdev_svc 00:03:10.720 LINK verify 00:03:10.979 LINK interrupt_tgt 00:03:10.979 CXX test/cpp_headers/accel_module.o 00:03:10.979 CXX test/cpp_headers/assert.o 00:03:10.979 CXX test/cpp_headers/barrier.o 00:03:10.979 CC test/event/reactor/reactor.o 00:03:10.979 CXX test/cpp_headers/base64.o 00:03:10.979 CC examples/thread/thread/thread_ex.o 00:03:11.237 CC app/spdk_top/spdk_top.o 00:03:11.237 LINK reactor 00:03:11.237 CC app/spdk_nvme_discover/discovery_aer.o 00:03:11.237 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:11.237 CXX test/cpp_headers/bdev.o 00:03:11.237 LINK mem_callbacks 00:03:11.237 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:11.237 LINK thread 00:03:11.237 LINK spdk_nvme_perf 00:03:11.237 LINK spdk_nvme_discover 00:03:11.237 CC test/event/reactor_perf/reactor_perf.o 00:03:11.495 CXX test/cpp_headers/bdev_module.o 00:03:11.495 CC test/env/vtophys/vtophys.o 00:03:11.495 LINK reactor_perf 00:03:11.495 LINK spdk_nvme_identify 00:03:11.495 CXX test/cpp_headers/bdev_zone.o 00:03:11.495 LINK nvme_fuzz 00:03:11.754 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:11.754 CC test/env/memory/memory_ut.o 00:03:11.754 LINK vtophys 00:03:11.754 CC examples/sock/hello_world/hello_sock.o 00:03:11.754 LINK env_dpdk_post_init 00:03:11.754 CC test/event/app_repeat/app_repeat.o 00:03:11.754 CXX test/cpp_headers/bit_array.o 00:03:11.754 CC test/event/scheduler/scheduler.o 00:03:12.012 CC test/app/histogram_perf/histogram_perf.o 00:03:12.012 CC test/env/pci/pci_ut.o 00:03:12.012 LINK app_repeat 00:03:12.012 CXX test/cpp_headers/bit_pool.o 00:03:12.012 LINK hello_sock 00:03:12.012 LINK histogram_perf 00:03:12.013 LINK scheduler 00:03:12.013 CXX test/cpp_headers/blobfs_bdev.o 00:03:12.013 CXX test/cpp_headers/blob_bdev.o 00:03:12.013 LINK spdk_top 00:03:12.269 CC test/accel/dif/dif.o 00:03:12.269 CXX test/cpp_headers/blobfs.o 00:03:12.269 LINK pci_ut 00:03:12.269 CC examples/vmd/lsvmd/lsvmd.o 00:03:12.269 CC examples/vmd/led/led.o 00:03:12.269 CC test/blobfs/mkfs/mkfs.o 00:03:12.527 CC app/vhost/vhost.o 00:03:12.527 CXX test/cpp_headers/blob.o 00:03:12.527 LINK lsvmd 00:03:12.527 LINK led 00:03:12.527 LINK mkfs 00:03:12.527 CC test/lvol/esnap/esnap.o 00:03:12.527 LINK vhost 00:03:12.527 CXX test/cpp_headers/conf.o 00:03:12.527 CXX test/cpp_headers/config.o 00:03:12.785 CXX test/cpp_headers/cpuset.o 00:03:12.785 CC test/nvme/aer/aer.o 00:03:12.785 CC test/nvme/reset/reset.o 00:03:12.785 LINK memory_ut 00:03:12.785 CC examples/idxd/perf/perf.o 00:03:12.785 CXX test/cpp_headers/crc16.o 00:03:12.785 CC app/spdk_dd/spdk_dd.o 00:03:13.043 LINK dif 00:03:13.043 CC app/fio/nvme/fio_plugin.o 00:03:13.043 CXX test/cpp_headers/crc32.o 00:03:13.043 LINK aer 00:03:13.043 LINK reset 00:03:13.043 LINK iscsi_fuzz 00:03:13.043 CXX test/cpp_headers/crc64.o 00:03:13.301 CC app/fio/bdev/fio_plugin.o 00:03:13.301 LINK idxd_perf 00:03:13.301 CXX test/cpp_headers/dif.o 00:03:13.301 CXX test/cpp_headers/dma.o 00:03:13.301 LINK spdk_dd 00:03:13.301 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:13.301 CC test/nvme/sgl/sgl.o 00:03:13.301 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:13.301 CXX test/cpp_headers/endian.o 00:03:13.301 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:13.559 CXX test/cpp_headers/env_dpdk.o 00:03:13.559 LINK spdk_nvme 00:03:13.559 LINK sgl 00:03:13.559 CC examples/accel/perf/accel_perf.o 00:03:13.559 LINK hello_fsdev 
00:03:13.559 CC test/bdev/bdevio/bdevio.o 00:03:13.559 CC examples/blob/hello_world/hello_blob.o 00:03:13.559 LINK spdk_bdev 00:03:13.817 CXX test/cpp_headers/env.o 00:03:13.817 CC examples/blob/cli/blobcli.o 00:03:13.817 CXX test/cpp_headers/event.o 00:03:13.817 CXX test/cpp_headers/fd_group.o 00:03:13.817 CC test/nvme/e2edp/nvme_dp.o 00:03:13.817 LINK vhost_fuzz 00:03:13.817 LINK hello_blob 00:03:14.076 CXX test/cpp_headers/fd.o 00:03:14.077 LINK bdevio 00:03:14.077 CC examples/nvme/hello_world/hello_world.o 00:03:14.077 CC test/app/jsoncat/jsoncat.o 00:03:14.077 CC test/nvme/overhead/overhead.o 00:03:14.077 CXX test/cpp_headers/file.o 00:03:14.077 LINK nvme_dp 00:03:14.077 LINK accel_perf 00:03:14.077 CC test/nvme/err_injection/err_injection.o 00:03:14.077 LINK jsoncat 00:03:14.336 CXX test/cpp_headers/fsdev.o 00:03:14.336 LINK hello_world 00:03:14.336 LINK blobcli 00:03:14.336 CXX test/cpp_headers/fsdev_module.o 00:03:14.336 CC examples/nvme/reconnect/reconnect.o 00:03:14.336 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:14.336 LINK err_injection 00:03:14.336 LINK overhead 00:03:14.336 CXX test/cpp_headers/ftl.o 00:03:14.336 CC test/app/stub/stub.o 00:03:14.595 CC test/nvme/startup/startup.o 00:03:14.595 CXX test/cpp_headers/gpt_spec.o 00:03:14.595 LINK stub 00:03:14.595 CC examples/nvme/arbitration/arbitration.o 00:03:14.595 CC test/nvme/reserve/reserve.o 00:03:14.595 CC test/nvme/simple_copy/simple_copy.o 00:03:14.595 CC examples/bdev/hello_world/hello_bdev.o 00:03:14.595 LINK startup 00:03:14.595 CXX test/cpp_headers/hexlify.o 00:03:14.595 LINK reconnect 00:03:14.854 CXX test/cpp_headers/histogram_data.o 00:03:14.854 LINK reserve 00:03:14.854 LINK nvme_manage 00:03:14.854 LINK simple_copy 00:03:14.854 LINK hello_bdev 00:03:14.854 CXX test/cpp_headers/idxd.o 00:03:14.854 LINK arbitration 00:03:14.854 CC test/nvme/connect_stress/connect_stress.o 00:03:14.854 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:14.854 CC examples/nvme/hotplug/hotplug.o 00:03:15.113 CXX test/cpp_headers/idxd_spec.o 00:03:15.113 CC test/nvme/boot_partition/boot_partition.o 00:03:15.113 LINK connect_stress 00:03:15.113 CC examples/nvme/abort/abort.o 00:03:15.113 LINK cmb_copy 00:03:15.113 CC examples/bdev/bdevperf/bdevperf.o 00:03:15.113 CC test/nvme/compliance/nvme_compliance.o 00:03:15.113 CC test/nvme/fused_ordering/fused_ordering.o 00:03:15.113 LINK hotplug 00:03:15.113 CXX test/cpp_headers/init.o 00:03:15.113 LINK boot_partition 00:03:15.372 CXX test/cpp_headers/ioat.o 00:03:15.372 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:15.372 CC test/nvme/fdp/fdp.o 00:03:15.372 CXX test/cpp_headers/ioat_spec.o 00:03:15.372 LINK fused_ordering 00:03:15.372 CC test/nvme/cuse/cuse.o 00:03:15.631 LINK abort 00:03:15.631 LINK nvme_compliance 00:03:15.631 CXX test/cpp_headers/iscsi_spec.o 00:03:15.631 LINK doorbell_aers 00:03:15.631 CXX test/cpp_headers/json.o 00:03:15.631 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:15.631 CXX test/cpp_headers/jsonrpc.o 00:03:15.631 CXX test/cpp_headers/keyring.o 00:03:15.631 CXX test/cpp_headers/keyring_module.o 00:03:15.631 CXX test/cpp_headers/likely.o 00:03:15.631 CXX test/cpp_headers/log.o 00:03:15.631 LINK fdp 00:03:15.631 LINK pmr_persistence 00:03:15.890 CXX test/cpp_headers/lvol.o 00:03:15.890 CXX test/cpp_headers/md5.o 00:03:15.890 CXX test/cpp_headers/memory.o 00:03:15.890 CXX test/cpp_headers/mmio.o 00:03:15.890 CXX test/cpp_headers/nbd.o 00:03:15.890 CXX test/cpp_headers/net.o 00:03:15.890 CXX test/cpp_headers/notify.o 00:03:15.890 CXX test/cpp_headers/nvme.o 
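[editor's note] The long run of CXX test/cpp_headers/*.o lines (which continues below, one object per public header) is a header self-sufficiency check: every include/spdk/*.h is compiled in isolation, so a header that silently relies on another header's includes fails the build. A rough sketch of the idea, assuming a hypothetical generator script — the log does not show how the per-header sources are actually produced:

  # Hypothetical: compile each public header on its own and report failures.
  for h in include/spdk/*.h; do
      echo "#include <spdk/$(basename "$h")>" > /tmp/hdr_check.cpp
      c++ -I include -c /tmp/hdr_check.cpp -o /dev/null \
          || echo "not self-contained: $h"
  done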
00:03:15.890 CXX test/cpp_headers/nvme_intel.o 00:03:15.890 CXX test/cpp_headers/nvme_ocssd.o 00:03:15.890 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:16.149 LINK bdevperf 00:03:16.149 CXX test/cpp_headers/nvme_spec.o 00:03:16.149 CXX test/cpp_headers/nvme_zns.o 00:03:16.149 CXX test/cpp_headers/nvmf_cmd.o 00:03:16.149 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:16.149 CXX test/cpp_headers/nvmf.o 00:03:16.149 CXX test/cpp_headers/nvmf_spec.o 00:03:16.149 CXX test/cpp_headers/nvmf_transport.o 00:03:16.149 CXX test/cpp_headers/opal.o 00:03:16.149 CXX test/cpp_headers/opal_spec.o 00:03:16.149 CXX test/cpp_headers/pci_ids.o 00:03:16.149 CXX test/cpp_headers/pipe.o 00:03:16.408 CXX test/cpp_headers/queue.o 00:03:16.408 CXX test/cpp_headers/reduce.o 00:03:16.408 CXX test/cpp_headers/rpc.o 00:03:16.408 CXX test/cpp_headers/scheduler.o 00:03:16.408 CXX test/cpp_headers/scsi.o 00:03:16.408 CXX test/cpp_headers/scsi_spec.o 00:03:16.408 CXX test/cpp_headers/sock.o 00:03:16.408 CXX test/cpp_headers/stdinc.o 00:03:16.408 CC examples/nvmf/nvmf/nvmf.o 00:03:16.408 CXX test/cpp_headers/string.o 00:03:16.408 CXX test/cpp_headers/thread.o 00:03:16.408 CXX test/cpp_headers/trace.o 00:03:16.666 CXX test/cpp_headers/trace_parser.o 00:03:16.666 CXX test/cpp_headers/tree.o 00:03:16.666 CXX test/cpp_headers/ublk.o 00:03:16.666 CXX test/cpp_headers/util.o 00:03:16.666 CXX test/cpp_headers/uuid.o 00:03:16.666 CXX test/cpp_headers/version.o 00:03:16.666 CXX test/cpp_headers/vfio_user_pci.o 00:03:16.666 CXX test/cpp_headers/vfio_user_spec.o 00:03:16.666 CXX test/cpp_headers/vhost.o 00:03:16.666 CXX test/cpp_headers/vmd.o 00:03:16.666 CXX test/cpp_headers/xor.o 00:03:16.666 LINK nvmf 00:03:16.666 CXX test/cpp_headers/zipf.o 00:03:16.666 LINK cuse 00:03:18.568 LINK esnap 00:03:18.826 00:03:18.826 real 1m21.337s 00:03:18.826 user 7m0.948s 00:03:18.826 sys 1m46.607s 00:03:18.826 13:44:11 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:18.826 13:44:11 make -- common/autotest_common.sh@10 -- $ set +x 00:03:18.826 ************************************ 00:03:18.826 END TEST make 00:03:18.826 ************************************ 00:03:18.826 13:44:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:18.826 13:44:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:18.826 13:44:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:18.826 13:44:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:18.826 13:44:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:18.826 13:44:11 -- pm/common@44 -- $ pid=5296 00:03:18.826 13:44:11 -- pm/common@50 -- $ kill -TERM 5296 00:03:18.826 13:44:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:18.826 13:44:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:18.826 13:44:11 -- pm/common@44 -- $ pid=5298 00:03:18.826 13:44:11 -- pm/common@50 -- $ kill -TERM 5298 00:03:18.826 13:44:11 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:18.826 13:44:11 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:19.086 13:44:11 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:19.086 13:44:11 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:19.086 13:44:11 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:19.086 13:44:11 -- common/autotest_common.sh@1711 -- # lt 1.15 2 
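[editor's note] The xtrace that follows steps through scripts/common.sh's cmp_versions, invoked here as `lt 1.15 2` to decide whether the installed lcov predates 2.x (it does, so the --rc lcov_* option spelling is selected). Condensed from the traced lines into a standalone sketch, numeric components only:

  # Condensed from the trace: split both versions on '.', '-' and ':',
  # then compare component-by-component as integers.
  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local ver1 ver2 op=$2 v
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
          ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '==' || $op == '>=' || $op == '<=' ]]
  }
  lt 1.15 2 && echo "old lcov"   # matches the trace: 1 < 2 on the first component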
00:03:19.086 13:44:11 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:19.086 13:44:11 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:19.086 13:44:11 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:19.086 13:44:11 -- scripts/common.sh@336 -- # IFS=.-: 00:03:19.086 13:44:11 -- scripts/common.sh@336 -- # read -ra ver1 00:03:19.086 13:44:11 -- scripts/common.sh@337 -- # IFS=.-: 00:03:19.086 13:44:11 -- scripts/common.sh@337 -- # read -ra ver2 00:03:19.086 13:44:11 -- scripts/common.sh@338 -- # local 'op=<' 00:03:19.086 13:44:11 -- scripts/common.sh@340 -- # ver1_l=2 00:03:19.086 13:44:11 -- scripts/common.sh@341 -- # ver2_l=1 00:03:19.086 13:44:11 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:19.086 13:44:11 -- scripts/common.sh@344 -- # case "$op" in 00:03:19.086 13:44:11 -- scripts/common.sh@345 -- # : 1 00:03:19.086 13:44:11 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:19.086 13:44:11 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:19.086 13:44:11 -- scripts/common.sh@365 -- # decimal 1 00:03:19.086 13:44:11 -- scripts/common.sh@353 -- # local d=1 00:03:19.086 13:44:11 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:19.086 13:44:11 -- scripts/common.sh@355 -- # echo 1 00:03:19.086 13:44:11 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:19.086 13:44:11 -- scripts/common.sh@366 -- # decimal 2 00:03:19.086 13:44:12 -- scripts/common.sh@353 -- # local d=2 00:03:19.086 13:44:12 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:19.086 13:44:12 -- scripts/common.sh@355 -- # echo 2 00:03:19.086 13:44:12 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:19.086 13:44:12 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:19.086 13:44:12 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:19.086 13:44:12 -- scripts/common.sh@368 -- # return 0 00:03:19.086 13:44:12 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:19.086 13:44:12 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:19.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.086 --rc genhtml_branch_coverage=1 00:03:19.086 --rc genhtml_function_coverage=1 00:03:19.086 --rc genhtml_legend=1 00:03:19.086 --rc geninfo_all_blocks=1 00:03:19.086 --rc geninfo_unexecuted_blocks=1 00:03:19.086 00:03:19.086 ' 00:03:19.086 13:44:12 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:19.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.086 --rc genhtml_branch_coverage=1 00:03:19.086 --rc genhtml_function_coverage=1 00:03:19.086 --rc genhtml_legend=1 00:03:19.086 --rc geninfo_all_blocks=1 00:03:19.086 --rc geninfo_unexecuted_blocks=1 00:03:19.086 00:03:19.086 ' 00:03:19.086 13:44:12 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:19.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.086 --rc genhtml_branch_coverage=1 00:03:19.086 --rc genhtml_function_coverage=1 00:03:19.086 --rc genhtml_legend=1 00:03:19.086 --rc geninfo_all_blocks=1 00:03:19.086 --rc geninfo_unexecuted_blocks=1 00:03:19.086 00:03:19.086 ' 00:03:19.086 13:44:12 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:19.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.086 --rc genhtml_branch_coverage=1 00:03:19.086 --rc genhtml_function_coverage=1 00:03:19.086 --rc genhtml_legend=1 00:03:19.086 --rc geninfo_all_blocks=1 00:03:19.086 --rc geninfo_unexecuted_blocks=1 00:03:19.086 00:03:19.086 ' 00:03:19.086 
13:44:12 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:19.086 13:44:12 -- nvmf/common.sh@7 -- # uname -s 00:03:19.086 13:44:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:19.086 13:44:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:19.086 13:44:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:19.086 13:44:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:19.086 13:44:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:19.086 13:44:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:19.086 13:44:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:19.086 13:44:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:19.086 13:44:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:19.086 13:44:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:19.086 13:44:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63a1cab5-85ed-4611-9076-2b12eeaf9a9e 00:03:19.086 13:44:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=63a1cab5-85ed-4611-9076-2b12eeaf9a9e 00:03:19.086 13:44:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:19.086 13:44:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:19.086 13:44:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:19.086 13:44:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:19.086 13:44:12 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:19.086 13:44:12 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:19.086 13:44:12 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:19.086 13:44:12 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:19.086 13:44:12 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:19.086 13:44:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.086 13:44:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.086 13:44:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.086 13:44:12 -- paths/export.sh@5 -- # export PATH 00:03:19.087 13:44:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.087 13:44:12 -- nvmf/common.sh@51 -- # : 0 00:03:19.087 13:44:12 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:19.087 13:44:12 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:19.087 13:44:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:19.087 13:44:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:19.087 
13:44:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:19.087 13:44:12 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:19.087 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:19.087 13:44:12 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:19.087 13:44:12 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:19.087 13:44:12 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:19.087 13:44:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:19.087 13:44:12 -- spdk/autotest.sh@32 -- # uname -s 00:03:19.087 13:44:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:19.087 13:44:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:19.087 13:44:12 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:19.087 13:44:12 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:19.087 13:44:12 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:19.087 13:44:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:19.087 13:44:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:19.087 13:44:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:19.087 13:44:12 -- spdk/autotest.sh@48 -- # udevadm_pid=55938 00:03:19.087 13:44:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:19.087 13:44:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:19.087 13:44:12 -- pm/common@17 -- # local monitor 00:03:19.087 13:44:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.087 13:44:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.087 13:44:12 -- pm/common@25 -- # sleep 1 00:03:19.087 13:44:12 -- pm/common@21 -- # date +%s 00:03:19.087 13:44:12 -- pm/common@21 -- # date +%s 00:03:19.087 13:44:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733924652 00:03:19.087 13:44:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733924652 00:03:19.345 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733924652_collect-cpu-load.pm.log 00:03:19.345 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733924652_collect-vmstat.pm.log 00:03:20.280 13:44:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:20.281 13:44:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:20.281 13:44:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:20.281 13:44:13 -- common/autotest_common.sh@10 -- # set +x 00:03:20.281 13:44:13 -- spdk/autotest.sh@59 -- # create_test_list 00:03:20.281 13:44:13 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:20.281 13:44:13 -- common/autotest_common.sh@10 -- # set +x 00:03:20.281 13:44:13 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:20.281 13:44:13 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:20.281 13:44:13 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:20.281 13:44:13 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:20.281 13:44:13 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:20.281 13:44:13 -- spdk/autotest.sh@65 -- # 
freebsd_update_contigmem_mod 00:03:20.281 13:44:13 -- common/autotest_common.sh@1457 -- # uname 00:03:20.281 13:44:13 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:20.281 13:44:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:20.281 13:44:13 -- common/autotest_common.sh@1477 -- # uname 00:03:20.281 13:44:13 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:20.281 13:44:13 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:20.281 13:44:13 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:20.281 lcov: LCOV version 1.15 00:03:20.281 13:44:13 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:35.158 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:35.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:50.039 13:44:42 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:50.039 13:44:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.039 13:44:42 -- common/autotest_common.sh@10 -- # set +x 00:03:50.039 13:44:42 -- spdk/autotest.sh@78 -- # rm -f 00:03:50.039 13:44:42 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:50.297 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:51.234 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:51.234 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:51.234 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:51.234 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:51.234 13:44:44 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:51.234 13:44:44 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:51.234 13:44:44 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:51.234 13:44:44 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:51.234 13:44:44 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:51.234 13:44:44 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:51.234 13:44:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:51.234 13:44:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:03:51.234 13:44:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:51.234 13:44:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:51.234 13:44:44 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:51.234 13:44:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.234 13:44:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:51.234 13:44:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:51.234 13:44:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:03:51.234 13:44:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:51.234 13:44:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 
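[editor's note] The is_block_zoned probes here (continuing below for the remaining namespaces) reduce to reading each block device's queue/zoned sysfs attribute; on this VM every namespace reports "none", so get_zoned_devs excludes nothing. The same check as a standalone sketch:

  # A block device is zoned iff /sys/block/<dev>/queue/zoned exists
  # and reads something other than "none".
  is_block_zoned() {
      local device=$1
      [[ -e /sys/block/$device/queue/zoned ]] || return 1
      [[ $(</sys/block/$device/queue/zoned) != none ]]
  }
  for ns in /sys/block/nvme*n*; do
      is_block_zoned "$(basename "$ns")" && echo "zoned: $ns"
  done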
00:03:51.234 13:44:44 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:51.234 13:44:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:51.234 13:44:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:51.234 13:44:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:51.234 13:44:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:03:51.234 13:44:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:51.234 13:44:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:03:51.234 13:44:44 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:03:51.234 13:44:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:51.234 13:44:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:51.234 13:44:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:51.234 13:44:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:03:51.234 13:44:44 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:03:51.234 13:44:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:51.234 13:44:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:51.234 13:44:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:51.234 13:44:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:03:51.234 13:44:44 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:03:51.234 13:44:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:51.234 13:44:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:51.234 13:44:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:51.234 13:44:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:03:51.234 13:44:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:51.234 13:44:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:03:51.234 13:44:44 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:03:51.234 13:44:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:51.234 13:44:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:51.235 13:44:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:51.235 13:44:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.235 13:44:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:51.235 13:44:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:51.235 13:44:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:51.235 13:44:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:51.235 No valid GPT data, bailing 00:03:51.235 13:44:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:51.235 13:44:44 -- scripts/common.sh@394 -- # pt= 00:03:51.235 13:44:44 -- scripts/common.sh@395 -- # return 1 00:03:51.235 13:44:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:51.235 1+0 records in 00:03:51.235 1+0 records out 00:03:51.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171987 s, 61.0 MB/s 00:03:51.235 13:44:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.235 13:44:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:51.235 13:44:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:51.235 13:44:44 -- 
scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:51.235 13:44:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:51.235 No valid GPT data, bailing 00:03:51.235 13:44:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:51.235 13:44:44 -- scripts/common.sh@394 -- # pt= 00:03:51.235 13:44:44 -- scripts/common.sh@395 -- # return 1 00:03:51.235 13:44:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:51.494 1+0 records in 00:03:51.494 1+0 records out 00:03:51.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00599272 s, 175 MB/s 00:03:51.494 13:44:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.494 13:44:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:51.494 13:44:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:51.494 13:44:44 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:51.494 13:44:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:51.494 No valid GPT data, bailing 00:03:51.494 13:44:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:51.494 13:44:44 -- scripts/common.sh@394 -- # pt= 00:03:51.494 13:44:44 -- scripts/common.sh@395 -- # return 1 00:03:51.494 13:44:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:51.494 1+0 records in 00:03:51.494 1+0 records out 00:03:51.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00635027 s, 165 MB/s 00:03:51.494 13:44:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.494 13:44:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:51.494 13:44:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:03:51.494 13:44:44 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:03:51.494 13:44:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:51.494 No valid GPT data, bailing 00:03:51.494 13:44:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:51.494 13:44:44 -- scripts/common.sh@394 -- # pt= 00:03:51.494 13:44:44 -- scripts/common.sh@395 -- # return 1 00:03:51.494 13:44:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:51.494 1+0 records in 00:03:51.494 1+0 records out 00:03:51.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00681497 s, 154 MB/s 00:03:51.494 13:44:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.494 13:44:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:51.494 13:44:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:03:51.494 13:44:44 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:03:51.494 13:44:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:51.494 No valid GPT data, bailing 00:03:51.494 13:44:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:51.494 13:44:44 -- scripts/common.sh@394 -- # pt= 00:03:51.494 13:44:44 -- scripts/common.sh@395 -- # return 1 00:03:51.494 13:44:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:51.494 1+0 records in 00:03:51.494 1+0 records out 00:03:51.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00619307 s, 169 MB/s 00:03:51.753 13:44:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.753 13:44:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:51.753 13:44:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:51.753 13:44:44 
-- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:51.753 13:44:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:51.753 No valid GPT data, bailing 00:03:51.753 13:44:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:51.753 13:44:44 -- scripts/common.sh@394 -- # pt= 00:03:51.753 13:44:44 -- scripts/common.sh@395 -- # return 1 00:03:51.753 13:44:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:51.753 1+0 records in 00:03:51.753 1+0 records out 00:03:51.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00579103 s, 181 MB/s 00:03:51.753 13:44:44 -- spdk/autotest.sh@105 -- # sync 00:03:51.753 13:44:44 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:51.753 13:44:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:51.753 13:44:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:55.096 13:44:47 -- spdk/autotest.sh@111 -- # uname -s 00:03:55.096 13:44:47 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:55.096 13:44:47 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:55.096 13:44:47 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:55.354 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.923 Hugepages 00:03:55.923 node hugesize free / total 00:03:55.923 node0 1048576kB 0 / 0 00:03:55.923 node0 2048kB 0 / 0 00:03:55.923 00:03:55.923 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:55.923 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:56.182 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:56.182 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:56.441 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:03:56.441 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:56.441 13:44:49 -- spdk/autotest.sh@117 -- # uname -s 00:03:56.441 13:44:49 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:56.441 13:44:49 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:56.441 13:44:49 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:57.378 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.946 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:57.946 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:57.946 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:57.946 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:58.205 13:44:51 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:59.143 13:44:52 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:59.143 13:44:52 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:59.143 13:44:52 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:59.143 13:44:52 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:59.143 13:44:52 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:59.143 13:44:52 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:59.143 13:44:52 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:59.143 13:44:52 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:59.143 13:44:52 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:59.143 13:44:52 -- 
common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:59.143 13:44:52 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:59.143 13:44:52 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:59.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.971 Waiting for block devices as requested 00:04:00.230 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:00.230 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:00.230 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:00.488 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:05.779 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:05.779 13:44:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:05.779 13:44:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:05.779 13:44:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:05.779 13:44:58 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:05.779 13:44:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:05.779 13:44:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:05.779 13:44:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:05.779 13:44:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:05.779 13:44:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:05.779 13:44:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:05.779 13:44:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:05.779 13:44:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:05.779 13:44:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:05.779 13:44:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:05.779 13:44:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:05.779 13:44:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:05.779 13:44:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:05.779 13:44:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:05.779 13:44:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:05.779 13:44:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:05.779 13:44:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:05.779 13:44:58 -- common/autotest_common.sh@1543 -- # continue 00:04:05.779 13:44:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:05.779 13:44:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:05.779 13:44:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:05.779 13:44:58 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:05.779 13:44:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:05.779 13:44:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:05.779 13:44:58 -- common/autotest_common.sh@1492 -- # basename 
/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:05.779 13:44:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:05.779 13:44:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:05.779 13:44:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:05.779 13:44:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:05.779 13:44:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:05.779 13:44:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:05.779 13:44:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:05.779 13:44:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:05.779 13:44:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:05.779 13:44:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:05.779 13:44:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:05.779 13:44:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:05.779 13:44:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:05.779 13:44:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:05.779 13:44:58 -- common/autotest_common.sh@1543 -- # continue 00:04:05.779 13:44:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:05.779 13:44:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:05.779 13:44:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:05.779 13:44:58 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:05.779 13:44:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:05.779 13:44:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:05.779 13:44:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:05.779 13:44:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:05.779 13:44:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:05.779 13:44:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:05.779 13:44:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:05.779 13:44:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:05.779 13:44:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:05.779 13:44:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:05.779 13:44:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:05.779 13:44:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:05.779 13:44:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:05.779 13:44:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:05.779 13:44:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:05.779 13:44:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:05.779 13:44:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:05.779 13:44:58 -- common/autotest_common.sh@1543 -- # continue 00:04:05.779 13:44:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:05.779 13:44:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:05.779 13:44:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:05.779 13:44:58 -- common/autotest_common.sh@1487 -- # 
grep 0000:00:13.0/nvme/nvme 00:04:05.779 13:44:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:05.779 13:44:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:05.779 13:44:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:05.779 13:44:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:05.779 13:44:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:05.779 13:44:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:05.779 13:44:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:05.779 13:44:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:05.779 13:44:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:05.779 13:44:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:05.779 13:44:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:05.779 13:44:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:05.779 13:44:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:05.779 13:44:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:05.779 13:44:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:05.779 13:44:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:05.779 13:44:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:05.779 13:44:58 -- common/autotest_common.sh@1543 -- # continue 00:04:05.779 13:44:58 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:05.779 13:44:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:05.779 13:44:58 -- common/autotest_common.sh@10 -- # set +x 00:04:05.779 13:44:58 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:05.779 13:44:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.779 13:44:58 -- common/autotest_common.sh@10 -- # set +x 00:04:05.779 13:44:58 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:06.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.297 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.297 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.297 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.297 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.557 13:45:00 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:07.557 13:45:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:07.557 13:45:00 -- common/autotest_common.sh@10 -- # set +x 00:04:07.557 13:45:00 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:07.557 13:45:00 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:07.557 13:45:00 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:07.557 13:45:00 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:07.557 13:45:00 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:07.557 13:45:00 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:07.557 13:45:00 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:07.557 13:45:00 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:07.557 13:45:00 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:07.557 13:45:00 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:07.557 13:45:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
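[editor's note] Each controller above passes the same two gates before the loop issues `continue`: OACS bit 3 (namespace management; 0x12a & 0x8 = 8) must be set, and unallocated NVM capacity must be 0. The parsing, as traced:

  # As traced: pull OACS and UNVMCAP out of `nvme id-ctrl` and test them.
  ctrl=/dev/nvme1
  oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)        # e.g. ' 0x12a'
  oacs_ns_manage=$((oacs & 0x8))                                # bit 3: ns management
  unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)  # e.g. ' 0'
  if ((oacs_ns_manage != 0)) && ((unvmcap == 0)); then
      echo "$ctrl supports ns-manage and has no unallocated capacity"
  fi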
00:04:07.557 13:45:00 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:07.558 13:45:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:07.558 13:45:00 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:07.558 13:45:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:07.558 13:45:00 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:07.558 13:45:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:07.558 13:45:00 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:07.558 13:45:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:07.558 13:45:00 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:07.558 13:45:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:07.558 13:45:00 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:07.558 13:45:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:07.558 13:45:00 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:07.558 13:45:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:07.558 13:45:00 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:07.558 13:45:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:07.558 13:45:00 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:07.558 13:45:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:07.558 13:45:00 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:07.558 13:45:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:07.558 13:45:00 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:07.558 13:45:00 -- common/autotest_common.sh@1572 -- # return 0 00:04:07.558 13:45:00 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:07.558 13:45:00 -- common/autotest_common.sh@1580 -- # return 0 00:04:07.558 13:45:00 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:07.817 13:45:00 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:07.817 13:45:00 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:07.817 13:45:00 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:07.817 13:45:00 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:07.817 13:45:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.817 13:45:00 -- common/autotest_common.sh@10 -- # set +x 00:04:07.817 13:45:00 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:07.817 13:45:00 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:07.817 13:45:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.817 13:45:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.817 13:45:00 -- common/autotest_common.sh@10 -- # set +x 00:04:07.817 ************************************ 00:04:07.817 START TEST env 00:04:07.817 ************************************ 00:04:07.817 13:45:00 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:07.817 * Looking for test storage... 
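[editor's note] In the opal_revert_cleanup trace above, get_nvme_bdfs derives the controller list from gen_nvme.sh's JSON output, and get_nvme_bdfs_by_id then keeps only controllers whose PCI device ID matches 0x0a54; the QEMU controllers all report 0x0010, so the list comes back empty and the OPAL revert is skipped. Condensed sketch of the two helpers (assumes $rootdir points at the spdk repo, as in the trace):

  # List NVMe BDFs from gen_nvme.sh JSON, then filter by PCI device ID.
  get_nvme_bdfs() {
      "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'
  }
  get_nvme_bdfs_by_id() {
      local id=$1 bdf
      for bdf in $(get_nvme_bdfs); do
          [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$id" ]] && echo "$bdf"
      done
  }
  get_nvme_bdfs_by_id 0x0a54   # empty on this VM: all four controllers are 0x0010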
00:04:07.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:07.817 13:45:00 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:07.817 13:45:00 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:07.817 13:45:00 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:07.817 13:45:00 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:07.817 13:45:00 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.817 13:45:00 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.817 13:45:00 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.817 13:45:00 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.817 13:45:00 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.817 13:45:00 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.817 13:45:00 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.817 13:45:00 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.817 13:45:00 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.817 13:45:00 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.817 13:45:00 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.817 13:45:00 env -- scripts/common.sh@344 -- # case "$op" in 00:04:07.817 13:45:00 env -- scripts/common.sh@345 -- # : 1 00:04:07.817 13:45:00 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.817 13:45:00 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:07.817 13:45:00 env -- scripts/common.sh@365 -- # decimal 1 00:04:07.817 13:45:00 env -- scripts/common.sh@353 -- # local d=1 00:04:07.817 13:45:00 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.817 13:45:00 env -- scripts/common.sh@355 -- # echo 1 00:04:07.817 13:45:00 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.817 13:45:00 env -- scripts/common.sh@366 -- # decimal 2 00:04:07.817 13:45:00 env -- scripts/common.sh@353 -- # local d=2 00:04:07.817 13:45:00 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.817 13:45:00 env -- scripts/common.sh@355 -- # echo 2 00:04:07.817 13:45:00 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.817 13:45:00 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.817 13:45:00 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.817 13:45:00 env -- scripts/common.sh@368 -- # return 0 00:04:07.817 13:45:00 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.817 13:45:00 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:07.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.817 --rc genhtml_branch_coverage=1 00:04:07.817 --rc genhtml_function_coverage=1 00:04:07.817 --rc genhtml_legend=1 00:04:07.817 --rc geninfo_all_blocks=1 00:04:07.817 --rc geninfo_unexecuted_blocks=1 00:04:07.817 00:04:07.817 ' 00:04:07.817 13:45:00 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:07.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.817 --rc genhtml_branch_coverage=1 00:04:07.817 --rc genhtml_function_coverage=1 00:04:07.817 --rc genhtml_legend=1 00:04:07.817 --rc geninfo_all_blocks=1 00:04:07.817 --rc geninfo_unexecuted_blocks=1 00:04:07.817 00:04:07.817 ' 00:04:07.817 13:45:00 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:07.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.817 --rc genhtml_branch_coverage=1 00:04:07.817 --rc genhtml_function_coverage=1 00:04:07.817 --rc 
genhtml_legend=1 00:04:07.817 --rc geninfo_all_blocks=1 00:04:07.817 --rc geninfo_unexecuted_blocks=1 00:04:07.817 00:04:07.817 ' 00:04:07.817 13:45:00 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:07.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.817 --rc genhtml_branch_coverage=1 00:04:07.817 --rc genhtml_function_coverage=1 00:04:07.817 --rc genhtml_legend=1 00:04:07.817 --rc geninfo_all_blocks=1 00:04:07.817 --rc geninfo_unexecuted_blocks=1 00:04:07.817 00:04:07.817 ' 00:04:07.817 13:45:00 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:07.817 13:45:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.817 13:45:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.817 13:45:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.817 ************************************ 00:04:07.817 START TEST env_memory 00:04:07.817 ************************************ 00:04:07.817 13:45:00 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:08.077 00:04:08.077 00:04:08.077 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.077 http://cunit.sourceforge.net/ 00:04:08.077 00:04:08.077 00:04:08.077 Suite: memory 00:04:08.077 Test: alloc and free memory map ...[2024-12-11 13:45:00.920284] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:08.077 passed 00:04:08.077 Test: mem map translation ...[2024-12-11 13:45:00.965625] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:08.077 [2024-12-11 13:45:00.965674] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:08.077 [2024-12-11 13:45:00.965742] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:08.077 [2024-12-11 13:45:00.965766] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:08.077 passed 00:04:08.077 Test: mem map registration ...[2024-12-11 13:45:01.033671] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:08.077 [2024-12-11 13:45:01.033726] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:08.077 passed 00:04:08.338 Test: mem map adjacent registrations ...passed 00:04:08.338 00:04:08.338 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.338 suites 1 1 n/a 0 0 00:04:08.338 tests 4 4 4 0 0 00:04:08.338 asserts 152 152 152 0 n/a 00:04:08.338 00:04:08.338 Elapsed time = 0.241 seconds 00:04:08.338 00:04:08.338 real 0m0.296s 00:04:08.338 user 0m0.254s 00:04:08.338 sys 0m0.032s 00:04:08.338 ************************************ 00:04:08.338 END TEST env_memory 00:04:08.338 ************************************ 00:04:08.338 13:45:01 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.338 13:45:01 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:08.338 13:45:01 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:08.338 13:45:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.338 13:45:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.338 13:45:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.338 ************************************ 00:04:08.338 START TEST env_vtophys 00:04:08.338 ************************************ 00:04:08.338 13:45:01 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:08.338 EAL: lib.eal log level changed from notice to debug 00:04:08.338 EAL: Detected lcore 0 as core 0 on socket 0 00:04:08.338 EAL: Detected lcore 1 as core 0 on socket 0 00:04:08.338 EAL: Detected lcore 2 as core 0 on socket 0 00:04:08.338 EAL: Detected lcore 3 as core 0 on socket 0 00:04:08.338 EAL: Detected lcore 4 as core 0 on socket 0 00:04:08.338 EAL: Detected lcore 5 as core 0 on socket 0 00:04:08.338 EAL: Detected lcore 6 as core 0 on socket 0 00:04:08.338 EAL: Detected lcore 7 as core 0 on socket 0 00:04:08.338 EAL: Detected lcore 8 as core 0 on socket 0 00:04:08.338 EAL: Detected lcore 9 as core 0 on socket 0 00:04:08.338 EAL: Maximum logical cores by configuration: 128 00:04:08.338 EAL: Detected CPU lcores: 10 00:04:08.338 EAL: Detected NUMA nodes: 1 00:04:08.338 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:08.338 EAL: Detected shared linkage of DPDK 00:04:08.338 EAL: No shared files mode enabled, IPC will be disabled 00:04:08.338 EAL: Selected IOVA mode 'PA' 00:04:08.338 EAL: Probing VFIO support... 00:04:08.338 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:08.338 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:08.338 EAL: Ask a virtual area of 0x2e000 bytes 00:04:08.338 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:08.338 EAL: Setting up physically contiguous memory... 
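An aside on the version gate traced at the start of each suite above: common/autotest_common.sh extracts the lcov version with lcov --version piped through awk '{print $NF}', then scripts/common.sh splits both version strings on '.', '-' and ':' and compares them component-wise to decide which lcov option spelling to export. A sketch of that comparison, reconstructed from the xtrace (variable names are taken from the trace; the real cmp_versions also handles operators other than '<', which is elided here, and padding missing components with 0 is an assumption):

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local ver1 ver2 ver1_l ver2_l v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]}; ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older: '<' holds
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer: '<' fails
        done
        return 1   # equal versions: '<' does not hold
    }
    cmp_versions 1.15 '<' 2 && echo 'lcov predates 2.x'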
00:04:08.338 EAL: Setting maximum number of open files to 524288 00:04:08.338 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:08.338 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:08.338 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.338 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:08.338 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.338 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.338 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:08.338 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:08.338 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.338 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:08.338 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.338 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.338 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:08.338 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:08.338 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.338 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:08.338 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.338 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.338 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:08.338 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:08.338 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.338 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:08.338 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.338 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.338 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:08.338 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:08.338 EAL: Hugepages will be freed exactly as allocated. 00:04:08.338 EAL: No shared files mode enabled, IPC is disabled 00:04:08.338 EAL: No shared files mode enabled, IPC is disabled 00:04:08.602 EAL: TSC frequency is ~2490000 KHz 00:04:08.602 EAL: Main lcore 0 is ready (tid=7f0f29853a40;cpuset=[0]) 00:04:08.602 EAL: Trying to obtain current memory policy. 00:04:08.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.602 EAL: Restoring previous memory policy: 0 00:04:08.602 EAL: request: mp_malloc_sync 00:04:08.602 EAL: No shared files mode enabled, IPC is disabled 00:04:08.602 EAL: Heap on socket 0 was expanded by 2MB 00:04:08.602 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:08.602 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:08.602 EAL: Mem event callback 'spdk:(nil)' registered 00:04:08.602 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:08.602 00:04:08.602 00:04:08.602 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.602 http://cunit.sourceforge.net/ 00:04:08.602 00:04:08.602 00:04:08.602 Suite: components_suite 00:04:08.861 Test: vtophys_malloc_test ...passed 00:04:08.861 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
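A quick sanity check on the memseg geometry above: each list advertises n_segs:8192 at hugepage_sz:2097152, and each large VA reservation asks for 0x400000000 bytes, so the two figures agree at 16 GiB per list (the smaller 0x61000 asks are apparently the per-list bookkeeping arrays). Verifiable with shell arithmetic:

    echo $(( 8192 * 2097152 )) $(( 0x400000000 ))   # both print 17179869184, i.e. 16 GiB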
00:04:08.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.861 EAL: Restoring previous memory policy: 4 00:04:08.861 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.861 EAL: request: mp_malloc_sync 00:04:08.861 EAL: No shared files mode enabled, IPC is disabled 00:04:08.861 EAL: Heap on socket 0 was expanded by 4MB 00:04:08.861 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.861 EAL: request: mp_malloc_sync 00:04:08.861 EAL: No shared files mode enabled, IPC is disabled 00:04:08.861 EAL: Heap on socket 0 was shrunk by 4MB 00:04:08.861 EAL: Trying to obtain current memory policy. 00:04:08.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.861 EAL: Restoring previous memory policy: 4 00:04:08.861 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.861 EAL: request: mp_malloc_sync 00:04:08.861 EAL: No shared files mode enabled, IPC is disabled 00:04:08.861 EAL: Heap on socket 0 was expanded by 6MB 00:04:08.861 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.861 EAL: request: mp_malloc_sync 00:04:08.861 EAL: No shared files mode enabled, IPC is disabled 00:04:08.861 EAL: Heap on socket 0 was shrunk by 6MB 00:04:08.861 EAL: Trying to obtain current memory policy. 00:04:08.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.861 EAL: Restoring previous memory policy: 4 00:04:08.861 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.861 EAL: request: mp_malloc_sync 00:04:08.861 EAL: No shared files mode enabled, IPC is disabled 00:04:08.861 EAL: Heap on socket 0 was expanded by 10MB 00:04:09.120 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.120 EAL: request: mp_malloc_sync 00:04:09.120 EAL: No shared files mode enabled, IPC is disabled 00:04:09.120 EAL: Heap on socket 0 was shrunk by 10MB 00:04:09.120 EAL: Trying to obtain current memory policy. 00:04:09.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.120 EAL: Restoring previous memory policy: 4 00:04:09.120 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.120 EAL: request: mp_malloc_sync 00:04:09.120 EAL: No shared files mode enabled, IPC is disabled 00:04:09.120 EAL: Heap on socket 0 was expanded by 18MB 00:04:09.120 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.120 EAL: request: mp_malloc_sync 00:04:09.120 EAL: No shared files mode enabled, IPC is disabled 00:04:09.120 EAL: Heap on socket 0 was shrunk by 18MB 00:04:09.120 EAL: Trying to obtain current memory policy. 00:04:09.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.120 EAL: Restoring previous memory policy: 4 00:04:09.120 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.120 EAL: request: mp_malloc_sync 00:04:09.120 EAL: No shared files mode enabled, IPC is disabled 00:04:09.120 EAL: Heap on socket 0 was expanded by 34MB 00:04:09.120 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.120 EAL: request: mp_malloc_sync 00:04:09.120 EAL: No shared files mode enabled, IPC is disabled 00:04:09.120 EAL: Heap on socket 0 was shrunk by 34MB 00:04:09.120 EAL: Trying to obtain current memory policy. 
00:04:09.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.120 EAL: Restoring previous memory policy: 4 00:04:09.120 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.120 EAL: request: mp_malloc_sync 00:04:09.120 EAL: No shared files mode enabled, IPC is disabled 00:04:09.120 EAL: Heap on socket 0 was expanded by 66MB 00:04:09.379 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.379 EAL: request: mp_malloc_sync 00:04:09.379 EAL: No shared files mode enabled, IPC is disabled 00:04:09.379 EAL: Heap on socket 0 was shrunk by 66MB 00:04:09.379 EAL: Trying to obtain current memory policy. 00:04:09.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.379 EAL: Restoring previous memory policy: 4 00:04:09.379 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.379 EAL: request: mp_malloc_sync 00:04:09.379 EAL: No shared files mode enabled, IPC is disabled 00:04:09.379 EAL: Heap on socket 0 was expanded by 130MB 00:04:09.636 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.636 EAL: request: mp_malloc_sync 00:04:09.636 EAL: No shared files mode enabled, IPC is disabled 00:04:09.636 EAL: Heap on socket 0 was shrunk by 130MB 00:04:09.895 EAL: Trying to obtain current memory policy. 00:04:09.895 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.895 EAL: Restoring previous memory policy: 4 00:04:09.895 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.895 EAL: request: mp_malloc_sync 00:04:09.895 EAL: No shared files mode enabled, IPC is disabled 00:04:09.895 EAL: Heap on socket 0 was expanded by 258MB 00:04:10.462 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.462 EAL: request: mp_malloc_sync 00:04:10.462 EAL: No shared files mode enabled, IPC is disabled 00:04:10.462 EAL: Heap on socket 0 was shrunk by 258MB 00:04:11.030 EAL: Trying to obtain current memory policy. 00:04:11.030 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.030 EAL: Restoring previous memory policy: 4 00:04:11.030 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.030 EAL: request: mp_malloc_sync 00:04:11.030 EAL: No shared files mode enabled, IPC is disabled 00:04:11.030 EAL: Heap on socket 0 was expanded by 514MB 00:04:11.966 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.966 EAL: request: mp_malloc_sync 00:04:11.966 EAL: No shared files mode enabled, IPC is disabled 00:04:11.966 EAL: Heap on socket 0 was shrunk by 514MB 00:04:12.908 EAL: Trying to obtain current memory policy. 
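A pattern worth noting in the expand/shrink ladder above (and in the 1026 MB round that follows): every expansion is a power of two plus 2 MB, consistent with vtophys_spdk_malloc_test doubling its allocation size from 2 MB up to 1 GB while each round drags in one extra 2 MB hugepage of overhead. That reading is an inference from the numbers, not something the log states, but it is easy to check:

    for mb in 4 6 10 18 34 66 130 258 514 1026; do printf '%s ' $(( mb - 2 )); done; echo
    # 2 4 8 16 32 64 128 256 512 1024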
00:04:12.908 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.908 EAL: Restoring previous memory policy: 4 00:04:12.908 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.908 EAL: request: mp_malloc_sync 00:04:12.908 EAL: No shared files mode enabled, IPC is disabled 00:04:12.908 EAL: Heap on socket 0 was expanded by 1026MB 00:04:14.845 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.845 EAL: request: mp_malloc_sync 00:04:14.845 EAL: No shared files mode enabled, IPC is disabled 00:04:14.845 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:16.749 passed 00:04:16.749 00:04:16.749 Run Summary: Type Total Ran Passed Failed Inactive 00:04:16.749 suites 1 1 n/a 0 0 00:04:16.749 tests 2 2 2 0 0 00:04:16.749 asserts 5796 5796 5796 0 n/a 00:04:16.749 00:04:16.749 Elapsed time = 7.909 seconds 00:04:16.749 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.749 EAL: request: mp_malloc_sync 00:04:16.749 EAL: No shared files mode enabled, IPC is disabled 00:04:16.749 EAL: Heap on socket 0 was shrunk by 2MB 00:04:16.749 EAL: No shared files mode enabled, IPC is disabled 00:04:16.749 EAL: No shared files mode enabled, IPC is disabled 00:04:16.749 EAL: No shared files mode enabled, IPC is disabled 00:04:16.749 ************************************ 00:04:16.749 END TEST env_vtophys 00:04:16.749 ************************************ 00:04:16.749 00:04:16.749 real 0m8.248s 00:04:16.749 user 0m7.260s 00:04:16.749 sys 0m0.828s 00:04:16.749 13:45:09 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.749 13:45:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:16.749 13:45:09 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:16.749 13:45:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.749 13:45:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.749 13:45:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.749 ************************************ 00:04:16.749 START TEST env_pci 00:04:16.749 ************************************ 00:04:16.749 13:45:09 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:16.749 00:04:16.749 00:04:16.749 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.749 http://cunit.sourceforge.net/ 00:04:16.749 00:04:16.749 00:04:16.749 Suite: pci 00:04:16.749 Test: pci_hook ...[2024-12-11 13:45:09.577574] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58771 has claimed it 00:04:16.749 passed 00:04:16.749 00:04:16.749 Run Summary: Type Total Ran Passed Failed Inactive 00:04:16.749 suites 1 1 n/a 0 0 00:04:16.749 tests 1 1 1 0 0 00:04:16.749 asserts 25 25 25 0 n/a 00:04:16.749 00:04:16.749 Elapsed time = 0.009 seconds 00:04:16.749 EAL: Cannot find device (10000:00:01.0) 00:04:16.749 EAL: Failed to attach device on primary process 00:04:16.749 00:04:16.749 real 0m0.118s 00:04:16.749 user 0m0.044s 00:04:16.749 sys 0m0.073s 00:04:16.749 13:45:09 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.749 13:45:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:16.749 ************************************ 00:04:16.749 END TEST env_pci 00:04:16.749 ************************************ 00:04:16.749 13:45:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:16.749 13:45:09 env -- env/env.sh@15 -- # uname 00:04:16.749 13:45:09 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:16.749 13:45:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:16.749 13:45:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:16.749 13:45:09 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:16.749 13:45:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.749 13:45:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.749 ************************************ 00:04:16.749 START TEST env_dpdk_post_init 00:04:16.749 ************************************ 00:04:16.749 13:45:09 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:16.749 EAL: Detected CPU lcores: 10 00:04:17.008 EAL: Detected NUMA nodes: 1 00:04:17.008 EAL: Detected shared linkage of DPDK 00:04:17.008 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:17.008 EAL: Selected IOVA mode 'PA' 00:04:17.008 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:17.008 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:17.008 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:17.008 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:17.008 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:17.008 Starting DPDK initialization... 00:04:17.008 Starting SPDK post initialization... 00:04:17.008 SPDK NVMe probe 00:04:17.008 Attaching to 0000:00:10.0 00:04:17.008 Attaching to 0000:00:11.0 00:04:17.008 Attaching to 0000:00:12.0 00:04:17.008 Attaching to 0000:00:13.0 00:04:17.008 Attached to 0000:00:10.0 00:04:17.008 Attached to 0000:00:11.0 00:04:17.008 Attached to 0000:00:13.0 00:04:17.008 Attached to 0000:00:12.0 00:04:17.008 Cleaning up... 
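For reference, vendor:device 1b36:0010 is QEMU's emulated NVM Express controller (1b36 is the Red Hat, Inc. vendor ID), so the four functions attached above are virtual disks of the test VM, not physical hardware. From inside the guest they can be listed with stock pciutils:

    lspci -nn -d 1b36:0010   # shows 00:10.0 through 00:13.0 in this run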
00:04:17.008 00:04:17.008 real 0m0.316s 00:04:17.008 user 0m0.104s 00:04:17.008 sys 0m0.113s 00:04:17.008 13:45:10 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.008 13:45:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.008 ************************************ 00:04:17.008 END TEST env_dpdk_post_init 00:04:17.008 ************************************ 00:04:17.266 13:45:10 env -- env/env.sh@26 -- # uname 00:04:17.266 13:45:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:17.266 13:45:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:17.266 13:45:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.266 13:45:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.266 13:45:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.266 ************************************ 00:04:17.266 START TEST env_mem_callbacks 00:04:17.266 ************************************ 00:04:17.266 13:45:10 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:17.266 EAL: Detected CPU lcores: 10 00:04:17.266 EAL: Detected NUMA nodes: 1 00:04:17.266 EAL: Detected shared linkage of DPDK 00:04:17.266 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:17.266 EAL: Selected IOVA mode 'PA' 00:04:17.266 00:04:17.266 00:04:17.266 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.266 http://cunit.sourceforge.net/ 00:04:17.266 00:04:17.266 00:04:17.266 Suite: memory 00:04:17.266 Test: test ... 00:04:17.266 register 0x200000200000 2097152 00:04:17.266 malloc 3145728 00:04:17.266 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:17.266 register 0x200000400000 4194304 00:04:17.525 buf 0x2000004fffc0 len 3145728 PASSED 00:04:17.525 malloc 64 00:04:17.525 buf 0x2000004ffec0 len 64 PASSED 00:04:17.525 malloc 4194304 00:04:17.525 register 0x200000800000 6291456 00:04:17.525 buf 0x2000009fffc0 len 4194304 PASSED 00:04:17.525 free 0x2000004fffc0 3145728 00:04:17.525 free 0x2000004ffec0 64 00:04:17.525 unregister 0x200000400000 4194304 PASSED 00:04:17.525 free 0x2000009fffc0 4194304 00:04:17.525 unregister 0x200000800000 6291456 PASSED 00:04:17.525 malloc 8388608 00:04:17.525 register 0x200000400000 10485760 00:04:17.525 buf 0x2000005fffc0 len 8388608 PASSED 00:04:17.525 free 0x2000005fffc0 8388608 00:04:17.525 unregister 0x200000400000 10485760 PASSED 00:04:17.525 passed 00:04:17.525 00:04:17.525 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.525 suites 1 1 n/a 0 0 00:04:17.525 tests 1 1 1 0 0 00:04:17.525 asserts 15 15 15 0 n/a 00:04:17.525 00:04:17.525 Elapsed time = 0.079 seconds 00:04:17.525 00:04:17.525 real 0m0.286s 00:04:17.525 user 0m0.102s 00:04:17.525 sys 0m0.081s 00:04:17.525 13:45:10 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.525 13:45:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:17.525 ************************************ 00:04:17.525 END TEST env_mem_callbacks 00:04:17.525 ************************************ 00:04:17.525 00:04:17.525 real 0m9.841s 00:04:17.525 user 0m7.995s 00:04:17.525 sys 0m1.483s 00:04:17.525 13:45:10 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.525 13:45:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.525 ************************************ 00:04:17.525 END TEST env 00:04:17.525 
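The register/malloc/buf/free/unregister lines above are the mem_callbacks unit test driving spdk_mem_register/spdk_mem_unregister by hand and printing each notification it receives; the 'buf ... PASSED' lines appear to check that each allocation fell inside a currently registered range, though the log does not spell that out. The suite can be re-run standalone via the same binary the harness invoked (path from the log; the host needs hugepages configured, as this VM already had):

    sudo /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks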
************************************ 00:04:17.525 13:45:10 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:17.525 13:45:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.525 13:45:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.525 13:45:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.525 ************************************ 00:04:17.525 START TEST rpc 00:04:17.525 ************************************ 00:04:17.525 13:45:10 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:17.783 * Looking for test storage... 00:04:17.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:17.783 13:45:10 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:17.783 13:45:10 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:17.783 13:45:10 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:17.783 13:45:10 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:17.783 13:45:10 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.783 13:45:10 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.783 13:45:10 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.783 13:45:10 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.783 13:45:10 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.783 13:45:10 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.783 13:45:10 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.783 13:45:10 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.783 13:45:10 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.783 13:45:10 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.783 13:45:10 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.783 13:45:10 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:17.783 13:45:10 rpc -- scripts/common.sh@345 -- # : 1 00:04:17.783 13:45:10 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.783 13:45:10 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:17.783 13:45:10 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:17.783 13:45:10 rpc -- scripts/common.sh@353 -- # local d=1 00:04:17.784 13:45:10 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.784 13:45:10 rpc -- scripts/common.sh@355 -- # echo 1 00:04:17.784 13:45:10 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.784 13:45:10 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:17.784 13:45:10 rpc -- scripts/common.sh@353 -- # local d=2 00:04:17.784 13:45:10 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.784 13:45:10 rpc -- scripts/common.sh@355 -- # echo 2 00:04:17.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
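Everything from here to the end of TEST rpc is driven over JSON-RPC: rpc.sh launches spdk_tgt, waitforlisten blocks until /var/tmp/spdk.sock answers, and each rpc_cmd in the traces proxies one RPC call. The same flow by hand, using scripts/rpc.py from this repo (the polling loop is a stand-in for the suite's waitforlisten helper; the bdev commands are the ones the traces below execute):

    build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1   # wait for the RPC socket to come up
    done
    scripts/rpc.py bdev_malloc_create 8 512                       # returns 'Malloc0'
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq length                     # 2, as rpc.sh@21 checks
    kill "$spdk_pid"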
00:04:17.784 13:45:10 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.784 13:45:10 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.784 13:45:10 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.784 13:45:10 rpc -- scripts/common.sh@368 -- # return 0 00:04:17.784 13:45:10 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.784 13:45:10 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:17.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.784 --rc genhtml_branch_coverage=1 00:04:17.784 --rc genhtml_function_coverage=1 00:04:17.784 --rc genhtml_legend=1 00:04:17.784 --rc geninfo_all_blocks=1 00:04:17.784 --rc geninfo_unexecuted_blocks=1 00:04:17.784 00:04:17.784 ' 00:04:17.784 13:45:10 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:17.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.784 --rc genhtml_branch_coverage=1 00:04:17.784 --rc genhtml_function_coverage=1 00:04:17.784 --rc genhtml_legend=1 00:04:17.784 --rc geninfo_all_blocks=1 00:04:17.784 --rc geninfo_unexecuted_blocks=1 00:04:17.784 00:04:17.784 ' 00:04:17.784 13:45:10 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:17.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.784 --rc genhtml_branch_coverage=1 00:04:17.784 --rc genhtml_function_coverage=1 00:04:17.784 --rc genhtml_legend=1 00:04:17.784 --rc geninfo_all_blocks=1 00:04:17.784 --rc geninfo_unexecuted_blocks=1 00:04:17.784 00:04:17.784 ' 00:04:17.784 13:45:10 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:17.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.784 --rc genhtml_branch_coverage=1 00:04:17.784 --rc genhtml_function_coverage=1 00:04:17.784 --rc genhtml_legend=1 00:04:17.784 --rc geninfo_all_blocks=1 00:04:17.784 --rc geninfo_unexecuted_blocks=1 00:04:17.784 00:04:17.784 ' 00:04:17.784 13:45:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58903 00:04:17.784 13:45:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.784 13:45:10 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:17.784 13:45:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58903 00:04:17.784 13:45:10 rpc -- common/autotest_common.sh@835 -- # '[' -z 58903 ']' 00:04:17.784 13:45:10 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.784 13:45:10 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.784 13:45:10 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.784 13:45:10 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.784 13:45:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.043 [2024-12-11 13:45:10.858979] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:04:18.043 [2024-12-11 13:45:10.859374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58903 ] 00:04:18.043 [2024-12-11 13:45:11.043882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.303 [2024-12-11 13:45:11.158245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:04:18.303 [2024-12-11 13:45:11.158498] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58903' to capture a snapshot of events at runtime. 00:04:18.303 [2024-12-11 13:45:11.158715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:18.303 [2024-12-11 13:45:11.158771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:18.303 [2024-12-11 13:45:11.158801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58903 for offline analysis/debug. 00:04:18.303 [2024-12-11 13:45:11.160197] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.243 13:45:12 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.243 13:45:12 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:19.243 13:45:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.243 13:45:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.243 13:45:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:19.243 13:45:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:19.243 13:45:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.243 13:45:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.243 13:45:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.243 ************************************ 00:04:19.243 START TEST rpc_integrity 00:04:19.243 ************************************ 00:04:19.243 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:19.243 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:19.243 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.243 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.243 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.243 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:19.243 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:19.243 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:19.243 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:19.243 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.243 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.243 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.243 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:19.243 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:19.243 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.243 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.243 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.243 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:19.243 { 00:04:19.243 "name": "Malloc0", 00:04:19.243 "aliases": [ 00:04:19.243 "ecbcabff-bcd2-4446-a7b6-c420ee9192f9" 00:04:19.243 ], 
00:04:19.243 "product_name": "Malloc disk", 00:04:19.243 "block_size": 512, 00:04:19.243 "num_blocks": 16384, 00:04:19.243 "uuid": "ecbcabff-bcd2-4446-a7b6-c420ee9192f9", 00:04:19.243 "assigned_rate_limits": { 00:04:19.243 "rw_ios_per_sec": 0, 00:04:19.243 "rw_mbytes_per_sec": 0, 00:04:19.243 "r_mbytes_per_sec": 0, 00:04:19.243 "w_mbytes_per_sec": 0 00:04:19.243 }, 00:04:19.243 "claimed": false, 00:04:19.243 "zoned": false, 00:04:19.243 "supported_io_types": { 00:04:19.243 "read": true, 00:04:19.243 "write": true, 00:04:19.243 "unmap": true, 00:04:19.243 "flush": true, 00:04:19.243 "reset": true, 00:04:19.243 "nvme_admin": false, 00:04:19.243 "nvme_io": false, 00:04:19.243 "nvme_io_md": false, 00:04:19.243 "write_zeroes": true, 00:04:19.243 "zcopy": true, 00:04:19.243 "get_zone_info": false, 00:04:19.243 "zone_management": false, 00:04:19.243 "zone_append": false, 00:04:19.243 "compare": false, 00:04:19.243 "compare_and_write": false, 00:04:19.243 "abort": true, 00:04:19.243 "seek_hole": false, 00:04:19.243 "seek_data": false, 00:04:19.243 "copy": true, 00:04:19.243 "nvme_iov_md": false 00:04:19.243 }, 00:04:19.243 "memory_domains": [ 00:04:19.243 { 00:04:19.243 "dma_device_id": "system", 00:04:19.243 "dma_device_type": 1 00:04:19.243 }, 00:04:19.243 { 00:04:19.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.243 "dma_device_type": 2 00:04:19.243 } 00:04:19.243 ], 00:04:19.243 "driver_specific": {} 00:04:19.243 } 00:04:19.243 ]' 00:04:19.243 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:19.243 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:19.243 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:19.243 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.243 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.243 [2024-12-11 13:45:12.230112] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:19.244 [2024-12-11 13:45:12.230179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:19.244 [2024-12-11 13:45:12.230207] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:19.244 [2024-12-11 13:45:12.230221] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:19.244 [2024-12-11 13:45:12.232778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:19.244 [2024-12-11 13:45:12.232835] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:19.244 Passthru0 00:04:19.244 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.244 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:19.244 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.244 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.244 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.244 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:19.244 { 00:04:19.244 "name": "Malloc0", 00:04:19.244 "aliases": [ 00:04:19.244 "ecbcabff-bcd2-4446-a7b6-c420ee9192f9" 00:04:19.244 ], 00:04:19.244 "product_name": "Malloc disk", 00:04:19.244 "block_size": 512, 00:04:19.244 "num_blocks": 16384, 00:04:19.244 "uuid": "ecbcabff-bcd2-4446-a7b6-c420ee9192f9", 00:04:19.244 "assigned_rate_limits": { 00:04:19.244 "rw_ios_per_sec": 0, 
00:04:19.244 "rw_mbytes_per_sec": 0, 00:04:19.244 "r_mbytes_per_sec": 0, 00:04:19.244 "w_mbytes_per_sec": 0 00:04:19.244 }, 00:04:19.244 "claimed": true, 00:04:19.244 "claim_type": "exclusive_write", 00:04:19.244 "zoned": false, 00:04:19.244 "supported_io_types": { 00:04:19.244 "read": true, 00:04:19.244 "write": true, 00:04:19.244 "unmap": true, 00:04:19.244 "flush": true, 00:04:19.244 "reset": true, 00:04:19.244 "nvme_admin": false, 00:04:19.244 "nvme_io": false, 00:04:19.244 "nvme_io_md": false, 00:04:19.244 "write_zeroes": true, 00:04:19.244 "zcopy": true, 00:04:19.244 "get_zone_info": false, 00:04:19.244 "zone_management": false, 00:04:19.244 "zone_append": false, 00:04:19.244 "compare": false, 00:04:19.244 "compare_and_write": false, 00:04:19.244 "abort": true, 00:04:19.244 "seek_hole": false, 00:04:19.244 "seek_data": false, 00:04:19.244 "copy": true, 00:04:19.244 "nvme_iov_md": false 00:04:19.244 }, 00:04:19.244 "memory_domains": [ 00:04:19.244 { 00:04:19.244 "dma_device_id": "system", 00:04:19.244 "dma_device_type": 1 00:04:19.244 }, 00:04:19.244 { 00:04:19.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.244 "dma_device_type": 2 00:04:19.244 } 00:04:19.244 ], 00:04:19.244 "driver_specific": {} 00:04:19.244 }, 00:04:19.244 { 00:04:19.244 "name": "Passthru0", 00:04:19.244 "aliases": [ 00:04:19.244 "409dda75-8b0a-5c06-b43c-d8271e789891" 00:04:19.244 ], 00:04:19.244 "product_name": "passthru", 00:04:19.244 "block_size": 512, 00:04:19.244 "num_blocks": 16384, 00:04:19.244 "uuid": "409dda75-8b0a-5c06-b43c-d8271e789891", 00:04:19.244 "assigned_rate_limits": { 00:04:19.244 "rw_ios_per_sec": 0, 00:04:19.244 "rw_mbytes_per_sec": 0, 00:04:19.244 "r_mbytes_per_sec": 0, 00:04:19.244 "w_mbytes_per_sec": 0 00:04:19.244 }, 00:04:19.244 "claimed": false, 00:04:19.244 "zoned": false, 00:04:19.244 "supported_io_types": { 00:04:19.244 "read": true, 00:04:19.244 "write": true, 00:04:19.244 "unmap": true, 00:04:19.244 "flush": true, 00:04:19.244 "reset": true, 00:04:19.244 "nvme_admin": false, 00:04:19.244 "nvme_io": false, 00:04:19.244 "nvme_io_md": false, 00:04:19.244 "write_zeroes": true, 00:04:19.244 "zcopy": true, 00:04:19.244 "get_zone_info": false, 00:04:19.244 "zone_management": false, 00:04:19.244 "zone_append": false, 00:04:19.244 "compare": false, 00:04:19.244 "compare_and_write": false, 00:04:19.244 "abort": true, 00:04:19.244 "seek_hole": false, 00:04:19.244 "seek_data": false, 00:04:19.244 "copy": true, 00:04:19.244 "nvme_iov_md": false 00:04:19.244 }, 00:04:19.244 "memory_domains": [ 00:04:19.244 { 00:04:19.244 "dma_device_id": "system", 00:04:19.244 "dma_device_type": 1 00:04:19.244 }, 00:04:19.244 { 00:04:19.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.244 "dma_device_type": 2 00:04:19.244 } 00:04:19.244 ], 00:04:19.244 "driver_specific": { 00:04:19.244 "passthru": { 00:04:19.244 "name": "Passthru0", 00:04:19.244 "base_bdev_name": "Malloc0" 00:04:19.244 } 00:04:19.244 } 00:04:19.244 } 00:04:19.244 ]' 00:04:19.244 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:19.505 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:19.505 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:19.505 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.505 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.505 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.505 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:04:19.505 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.505 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.505 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.505 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:19.505 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.505 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.505 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.505 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:19.505 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:19.505 ************************************ 00:04:19.505 END TEST rpc_integrity 00:04:19.505 ************************************ 00:04:19.505 13:45:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:19.505 00:04:19.505 real 0m0.359s 00:04:19.505 user 0m0.191s 00:04:19.505 sys 0m0.062s 00:04:19.505 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.506 13:45:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.506 13:45:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:19.506 13:45:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.506 13:45:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.506 13:45:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.506 ************************************ 00:04:19.506 START TEST rpc_plugins 00:04:19.506 ************************************ 00:04:19.506 13:45:12 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:19.506 13:45:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:19.506 13:45:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.506 13:45:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.506 13:45:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.506 13:45:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:19.506 13:45:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:19.506 13:45:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.506 13:45:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.765 13:45:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.765 13:45:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:19.765 { 00:04:19.765 "name": "Malloc1", 00:04:19.765 "aliases": [ 00:04:19.765 "0f2e609d-2060-4a73-ac2a-3288fee4b5d5" 00:04:19.765 ], 00:04:19.765 "product_name": "Malloc disk", 00:04:19.765 "block_size": 4096, 00:04:19.765 "num_blocks": 256, 00:04:19.765 "uuid": "0f2e609d-2060-4a73-ac2a-3288fee4b5d5", 00:04:19.765 "assigned_rate_limits": { 00:04:19.765 "rw_ios_per_sec": 0, 00:04:19.765 "rw_mbytes_per_sec": 0, 00:04:19.765 "r_mbytes_per_sec": 0, 00:04:19.765 "w_mbytes_per_sec": 0 00:04:19.765 }, 00:04:19.765 "claimed": false, 00:04:19.765 "zoned": false, 00:04:19.765 "supported_io_types": { 00:04:19.765 "read": true, 00:04:19.765 "write": true, 00:04:19.765 "unmap": true, 00:04:19.765 "flush": true, 00:04:19.765 "reset": true, 00:04:19.765 "nvme_admin": false, 00:04:19.765 "nvme_io": false, 00:04:19.765 "nvme_io_md": false, 00:04:19.765 "write_zeroes": true, 
00:04:19.765 "zcopy": true, 00:04:19.765 "get_zone_info": false, 00:04:19.765 "zone_management": false, 00:04:19.765 "zone_append": false, 00:04:19.765 "compare": false, 00:04:19.765 "compare_and_write": false, 00:04:19.765 "abort": true, 00:04:19.765 "seek_hole": false, 00:04:19.765 "seek_data": false, 00:04:19.765 "copy": true, 00:04:19.765 "nvme_iov_md": false 00:04:19.765 }, 00:04:19.765 "memory_domains": [ 00:04:19.765 { 00:04:19.765 "dma_device_id": "system", 00:04:19.765 "dma_device_type": 1 00:04:19.766 }, 00:04:19.766 { 00:04:19.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.766 "dma_device_type": 2 00:04:19.766 } 00:04:19.766 ], 00:04:19.766 "driver_specific": {} 00:04:19.766 } 00:04:19.766 ]' 00:04:19.766 13:45:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:19.766 13:45:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:19.766 13:45:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:19.766 13:45:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.766 13:45:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.766 13:45:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.766 13:45:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:19.766 13:45:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.766 13:45:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.766 13:45:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.766 13:45:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:19.766 13:45:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:19.766 ************************************ 00:04:19.766 END TEST rpc_plugins 00:04:19.766 ************************************ 00:04:19.766 13:45:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:19.766 00:04:19.766 real 0m0.172s 00:04:19.766 user 0m0.089s 00:04:19.766 sys 0m0.036s 00:04:19.766 13:45:12 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.766 13:45:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.766 13:45:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:19.766 13:45:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.766 13:45:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.766 13:45:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.766 ************************************ 00:04:19.766 START TEST rpc_trace_cmd_test 00:04:19.766 ************************************ 00:04:19.766 13:45:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:19.766 13:45:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:19.766 13:45:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:19.766 13:45:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.766 13:45:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:19.766 13:45:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.766 13:45:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:19.766 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58903", 00:04:19.766 "tpoint_group_mask": "0x8", 00:04:19.766 "iscsi_conn": { 00:04:19.766 "mask": "0x2", 00:04:19.766 "tpoint_mask": "0x0" 00:04:19.766 }, 00:04:19.766 "scsi": { 00:04:19.766 
"mask": "0x4", 00:04:19.766 "tpoint_mask": "0x0" 00:04:19.766 }, 00:04:19.766 "bdev": { 00:04:19.766 "mask": "0x8", 00:04:19.766 "tpoint_mask": "0xffffffffffffffff" 00:04:19.766 }, 00:04:19.766 "nvmf_rdma": { 00:04:19.766 "mask": "0x10", 00:04:19.766 "tpoint_mask": "0x0" 00:04:19.766 }, 00:04:19.766 "nvmf_tcp": { 00:04:19.766 "mask": "0x20", 00:04:19.766 "tpoint_mask": "0x0" 00:04:19.766 }, 00:04:19.766 "ftl": { 00:04:19.766 "mask": "0x40", 00:04:19.766 "tpoint_mask": "0x0" 00:04:19.766 }, 00:04:19.766 "blobfs": { 00:04:19.766 "mask": "0x80", 00:04:19.766 "tpoint_mask": "0x0" 00:04:19.766 }, 00:04:19.766 "dsa": { 00:04:19.766 "mask": "0x200", 00:04:19.766 "tpoint_mask": "0x0" 00:04:19.766 }, 00:04:19.766 "thread": { 00:04:19.766 "mask": "0x400", 00:04:19.766 "tpoint_mask": "0x0" 00:04:19.766 }, 00:04:19.766 "nvme_pcie": { 00:04:19.766 "mask": "0x800", 00:04:19.766 "tpoint_mask": "0x0" 00:04:19.766 }, 00:04:19.766 "iaa": { 00:04:19.766 "mask": "0x1000", 00:04:19.766 "tpoint_mask": "0x0" 00:04:19.766 }, 00:04:19.766 "nvme_tcp": { 00:04:19.766 "mask": "0x2000", 00:04:19.766 "tpoint_mask": "0x0" 00:04:19.766 }, 00:04:19.766 "bdev_nvme": { 00:04:19.766 "mask": "0x4000", 00:04:19.766 "tpoint_mask": "0x0" 00:04:19.766 }, 00:04:19.766 "sock": { 00:04:19.766 "mask": "0x8000", 00:04:19.766 "tpoint_mask": "0x0" 00:04:19.766 }, 00:04:19.766 "blob": { 00:04:19.766 "mask": "0x10000", 00:04:19.766 "tpoint_mask": "0x0" 00:04:19.766 }, 00:04:19.766 "bdev_raid": { 00:04:19.766 "mask": "0x20000", 00:04:19.766 "tpoint_mask": "0x0" 00:04:19.766 }, 00:04:19.766 "scheduler": { 00:04:19.766 "mask": "0x40000", 00:04:19.766 "tpoint_mask": "0x0" 00:04:19.766 } 00:04:19.766 }' 00:04:19.766 13:45:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:19.766 13:45:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:19.766 13:45:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:20.025 13:45:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:20.025 13:45:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:20.025 13:45:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:20.025 13:45:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:20.025 13:45:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:20.025 13:45:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:20.025 ************************************ 00:04:20.025 END TEST rpc_trace_cmd_test 00:04:20.025 ************************************ 00:04:20.025 13:45:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:20.025 00:04:20.025 real 0m0.236s 00:04:20.025 user 0m0.185s 00:04:20.025 sys 0m0.040s 00:04:20.025 13:45:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.025 13:45:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:20.025 13:45:13 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:20.025 13:45:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:20.025 13:45:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:20.025 13:45:13 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.025 13:45:13 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.025 13:45:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.025 ************************************ 00:04:20.025 START TEST rpc_daemon_integrity 00:04:20.025 
************************************ 00:04:20.025 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:20.025 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:20.025 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.025 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.025 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.025 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:20.025 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:20.285 { 00:04:20.285 "name": "Malloc2", 00:04:20.285 "aliases": [ 00:04:20.285 "85d44fe3-efd8-4008-985e-c673050b194b" 00:04:20.285 ], 00:04:20.285 "product_name": "Malloc disk", 00:04:20.285 "block_size": 512, 00:04:20.285 "num_blocks": 16384, 00:04:20.285 "uuid": "85d44fe3-efd8-4008-985e-c673050b194b", 00:04:20.285 "assigned_rate_limits": { 00:04:20.285 "rw_ios_per_sec": 0, 00:04:20.285 "rw_mbytes_per_sec": 0, 00:04:20.285 "r_mbytes_per_sec": 0, 00:04:20.285 "w_mbytes_per_sec": 0 00:04:20.285 }, 00:04:20.285 "claimed": false, 00:04:20.285 "zoned": false, 00:04:20.285 "supported_io_types": { 00:04:20.285 "read": true, 00:04:20.285 "write": true, 00:04:20.285 "unmap": true, 00:04:20.285 "flush": true, 00:04:20.285 "reset": true, 00:04:20.285 "nvme_admin": false, 00:04:20.285 "nvme_io": false, 00:04:20.285 "nvme_io_md": false, 00:04:20.285 "write_zeroes": true, 00:04:20.285 "zcopy": true, 00:04:20.285 "get_zone_info": false, 00:04:20.285 "zone_management": false, 00:04:20.285 "zone_append": false, 00:04:20.285 "compare": false, 00:04:20.285 "compare_and_write": false, 00:04:20.285 "abort": true, 00:04:20.285 "seek_hole": false, 00:04:20.285 "seek_data": false, 00:04:20.285 "copy": true, 00:04:20.285 "nvme_iov_md": false 00:04:20.285 }, 00:04:20.285 "memory_domains": [ 00:04:20.285 { 00:04:20.285 "dma_device_id": "system", 00:04:20.285 "dma_device_type": 1 00:04:20.285 }, 00:04:20.285 { 00:04:20.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.285 "dma_device_type": 2 00:04:20.285 } 00:04:20.285 ], 00:04:20.285 "driver_specific": {} 00:04:20.285 } 00:04:20.285 ]' 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.285 [2024-12-11 13:45:13.213968] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:20.285 [2024-12-11 13:45:13.214042] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:20.285 [2024-12-11 13:45:13.214066] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:20.285 [2024-12-11 13:45:13.214084] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:20.285 [2024-12-11 13:45:13.216597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:20.285 [2024-12-11 13:45:13.216749] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:20.285 Passthru0 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.285 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:20.285 { 00:04:20.285 "name": "Malloc2", 00:04:20.285 "aliases": [ 00:04:20.285 "85d44fe3-efd8-4008-985e-c673050b194b" 00:04:20.285 ], 00:04:20.285 "product_name": "Malloc disk", 00:04:20.285 "block_size": 512, 00:04:20.285 "num_blocks": 16384, 00:04:20.285 "uuid": "85d44fe3-efd8-4008-985e-c673050b194b", 00:04:20.285 "assigned_rate_limits": { 00:04:20.285 "rw_ios_per_sec": 0, 00:04:20.285 "rw_mbytes_per_sec": 0, 00:04:20.285 "r_mbytes_per_sec": 0, 00:04:20.285 "w_mbytes_per_sec": 0 00:04:20.285 }, 00:04:20.285 "claimed": true, 00:04:20.285 "claim_type": "exclusive_write", 00:04:20.285 "zoned": false, 00:04:20.285 "supported_io_types": { 00:04:20.285 "read": true, 00:04:20.285 "write": true, 00:04:20.285 "unmap": true, 00:04:20.285 "flush": true, 00:04:20.286 "reset": true, 00:04:20.286 "nvme_admin": false, 00:04:20.286 "nvme_io": false, 00:04:20.286 "nvme_io_md": false, 00:04:20.286 "write_zeroes": true, 00:04:20.286 "zcopy": true, 00:04:20.286 "get_zone_info": false, 00:04:20.286 "zone_management": false, 00:04:20.286 "zone_append": false, 00:04:20.286 "compare": false, 00:04:20.286 "compare_and_write": false, 00:04:20.286 "abort": true, 00:04:20.286 "seek_hole": false, 00:04:20.286 "seek_data": false, 00:04:20.286 "copy": true, 00:04:20.286 "nvme_iov_md": false 00:04:20.286 }, 00:04:20.286 "memory_domains": [ 00:04:20.286 { 00:04:20.286 "dma_device_id": "system", 00:04:20.286 "dma_device_type": 1 00:04:20.286 }, 00:04:20.286 { 00:04:20.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.286 "dma_device_type": 2 00:04:20.286 } 00:04:20.286 ], 00:04:20.286 "driver_specific": {} 00:04:20.286 }, 00:04:20.286 { 00:04:20.286 "name": "Passthru0", 00:04:20.286 "aliases": [ 00:04:20.286 "f1db3020-a833-55d1-93a5-75b89b5babe5" 00:04:20.286 ], 00:04:20.286 "product_name": "passthru", 00:04:20.286 "block_size": 512, 00:04:20.286 "num_blocks": 16384, 00:04:20.286 "uuid": "f1db3020-a833-55d1-93a5-75b89b5babe5", 00:04:20.286 "assigned_rate_limits": { 00:04:20.286 
"rw_ios_per_sec": 0, 00:04:20.286 "rw_mbytes_per_sec": 0, 00:04:20.286 "r_mbytes_per_sec": 0, 00:04:20.286 "w_mbytes_per_sec": 0 00:04:20.286 }, 00:04:20.286 "claimed": false, 00:04:20.286 "zoned": false, 00:04:20.286 "supported_io_types": { 00:04:20.286 "read": true, 00:04:20.286 "write": true, 00:04:20.286 "unmap": true, 00:04:20.286 "flush": true, 00:04:20.286 "reset": true, 00:04:20.286 "nvme_admin": false, 00:04:20.286 "nvme_io": false, 00:04:20.286 "nvme_io_md": false, 00:04:20.286 "write_zeroes": true, 00:04:20.286 "zcopy": true, 00:04:20.286 "get_zone_info": false, 00:04:20.286 "zone_management": false, 00:04:20.286 "zone_append": false, 00:04:20.286 "compare": false, 00:04:20.286 "compare_and_write": false, 00:04:20.286 "abort": true, 00:04:20.286 "seek_hole": false, 00:04:20.286 "seek_data": false, 00:04:20.286 "copy": true, 00:04:20.286 "nvme_iov_md": false 00:04:20.286 }, 00:04:20.286 "memory_domains": [ 00:04:20.286 { 00:04:20.286 "dma_device_id": "system", 00:04:20.286 "dma_device_type": 1 00:04:20.286 }, 00:04:20.286 { 00:04:20.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.286 "dma_device_type": 2 00:04:20.286 } 00:04:20.286 ], 00:04:20.286 "driver_specific": { 00:04:20.286 "passthru": { 00:04:20.286 "name": "Passthru0", 00:04:20.286 "base_bdev_name": "Malloc2" 00:04:20.286 } 00:04:20.286 } 00:04:20.286 } 00:04:20.286 ]' 00:04:20.286 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:20.286 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:20.286 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:20.286 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.286 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.286 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.286 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:20.286 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.286 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.574 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.574 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:20.574 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.574 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.574 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.574 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:20.574 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:20.574 ************************************ 00:04:20.574 END TEST rpc_daemon_integrity 00:04:20.574 ************************************ 00:04:20.574 13:45:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:20.574 00:04:20.574 real 0m0.376s 00:04:20.574 user 0m0.197s 00:04:20.574 sys 0m0.069s 00:04:20.574 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.574 13:45:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.574 13:45:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:20.574 13:45:13 rpc -- rpc/rpc.sh@84 -- # killprocess 58903 00:04:20.574 13:45:13 rpc -- 
common/autotest_common.sh@954 -- # '[' -z 58903 ']' 00:04:20.574 13:45:13 rpc -- common/autotest_common.sh@958 -- # kill -0 58903 00:04:20.574 13:45:13 rpc -- common/autotest_common.sh@959 -- # uname 00:04:20.574 13:45:13 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.574 13:45:13 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58903 00:04:20.574 killing process with pid 58903 00:04:20.574 13:45:13 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.574 13:45:13 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.574 13:45:13 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58903' 00:04:20.574 13:45:13 rpc -- common/autotest_common.sh@973 -- # kill 58903 00:04:20.574 13:45:13 rpc -- common/autotest_common.sh@978 -- # wait 58903 00:04:23.112 00:04:23.112 real 0m5.302s 00:04:23.112 user 0m5.834s 00:04:23.112 sys 0m0.961s 00:04:23.112 13:45:15 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.112 ************************************ 00:04:23.112 13:45:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.112 END TEST rpc 00:04:23.112 ************************************ 00:04:23.112 13:45:15 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:23.112 13:45:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.112 13:45:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.112 13:45:15 -- common/autotest_common.sh@10 -- # set +x 00:04:23.112 ************************************ 00:04:23.112 START TEST skip_rpc 00:04:23.112 ************************************ 00:04:23.112 13:45:15 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:23.112 * Looking for test storage... 00:04:23.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.112 13:45:16 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:23.112 13:45:16 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:23.112 13:45:16 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:23.112 13:45:16 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.112 13:45:16 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:23.112 13:45:16 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.112 13:45:16 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:23.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.112 --rc genhtml_branch_coverage=1 00:04:23.113 --rc genhtml_function_coverage=1 00:04:23.113 --rc genhtml_legend=1 00:04:23.113 --rc geninfo_all_blocks=1 00:04:23.113 --rc geninfo_unexecuted_blocks=1 00:04:23.113 00:04:23.113 ' 00:04:23.113 13:45:16 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:23.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.113 --rc genhtml_branch_coverage=1 00:04:23.113 --rc genhtml_function_coverage=1 00:04:23.113 --rc genhtml_legend=1 00:04:23.113 --rc geninfo_all_blocks=1 00:04:23.113 --rc geninfo_unexecuted_blocks=1 00:04:23.113 00:04:23.113 ' 00:04:23.113 13:45:16 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:23.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.113 --rc genhtml_branch_coverage=1 00:04:23.113 --rc genhtml_function_coverage=1 00:04:23.113 --rc genhtml_legend=1 00:04:23.113 --rc geninfo_all_blocks=1 00:04:23.113 --rc geninfo_unexecuted_blocks=1 00:04:23.113 00:04:23.113 ' 00:04:23.113 13:45:16 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:23.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.113 --rc genhtml_branch_coverage=1 00:04:23.113 --rc genhtml_function_coverage=1 00:04:23.113 --rc genhtml_legend=1 00:04:23.113 --rc geninfo_all_blocks=1 00:04:23.113 --rc geninfo_unexecuted_blocks=1 00:04:23.113 00:04:23.113 ' 00:04:23.113 13:45:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:23.113 13:45:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:23.113 13:45:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:23.113 13:45:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.113 13:45:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.113 13:45:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.113 ************************************ 00:04:23.113 START TEST skip_rpc 00:04:23.113 ************************************ 00:04:23.113 13:45:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:23.113 13:45:16 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=59138 00:04:23.113 13:45:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:23.113 13:45:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.113 13:45:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:23.372 [2024-12-11 13:45:16.248501] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:04:23.372 [2024-12-11 13:45:16.248817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59138 ] 00:04:23.372 [2024-12-11 13:45:16.415040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.635 [2024-12-11 13:45:16.525664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59138 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59138 ']' 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59138 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59138 00:04:28.953 killing process with pid 59138 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59138' 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@973 
-- # kill 59138 00:04:28.953 13:45:21 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59138 00:04:30.858 ************************************ 00:04:30.858 END TEST skip_rpc 00:04:30.858 ************************************ 00:04:30.858 00:04:30.858 real 0m7.422s 00:04:30.858 user 0m6.932s 00:04:30.858 sys 0m0.413s 00:04:30.858 13:45:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.858 13:45:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.858 13:45:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:30.858 13:45:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.858 13:45:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.858 13:45:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.858 ************************************ 00:04:30.858 START TEST skip_rpc_with_json 00:04:30.858 ************************************ 00:04:30.858 13:45:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:30.858 13:45:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:30.858 13:45:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59242 00:04:30.858 13:45:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.858 13:45:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.859 13:45:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59242 00:04:30.859 13:45:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59242 ']' 00:04:30.859 13:45:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.859 13:45:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.859 13:45:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.859 13:45:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.859 13:45:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.859 [2024-12-11 13:45:23.741126] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
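skip_rpc has just passed: with --no-rpc-server the target never opens /var/tmp/spdk.sock, so the rpc_cmd probe had to fail and es=1 was the accepted outcome. The skip_rpc_with_json case starting here checks the opposite direction, that a configuration saved over RPC can boot a fresh target on its own. A condensed sketch of that round trip (rpc_cmd, waitforlisten and killprocess are helpers from common/autotest_common.sh; paths as in the trace):

  build/bin/spdk_tgt -m 0x1 & spdk_pid=$!
  waitforlisten $spdk_pid
  rpc_cmd nvmf_create_transport -t tcp                # make a change worth persisting
  rpc_cmd save_config > test/rpc/config.json          # full per-subsystem dump, printed below
  killprocess $spdk_pid
  # relaunch with no RPC server, configured purely from the saved JSON:
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json &> test/rpc/log.txt &
  sleep 5
  grep -q 'TCP Transport Init' test/rpc/log.txt       # the transport came back from the file
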
00:04:30.859 [2024-12-11 13:45:23.741462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59242 ] 00:04:31.117 [2024-12-11 13:45:23.913241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.117 [2024-12-11 13:45:24.024054] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.053 13:45:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.054 13:45:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:32.054 13:45:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:32.054 13:45:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.054 13:45:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.054 [2024-12-11 13:45:24.887039] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:32.054 request: 00:04:32.054 { 00:04:32.054 "trtype": "tcp", 00:04:32.054 "method": "nvmf_get_transports", 00:04:32.054 "req_id": 1 00:04:32.054 } 00:04:32.054 Got JSON-RPC error response 00:04:32.054 response: 00:04:32.054 { 00:04:32.054 "code": -19, 00:04:32.054 "message": "No such device" 00:04:32.054 } 00:04:32.054 13:45:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:32.054 13:45:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:32.054 13:45:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.054 13:45:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.054 [2024-12-11 13:45:24.903124] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:32.054 13:45:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.054 13:45:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:32.054 13:45:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.054 13:45:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.054 13:45:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.054 13:45:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:32.054 { 00:04:32.054 "subsystems": [ 00:04:32.054 { 00:04:32.054 "subsystem": "fsdev", 00:04:32.054 "config": [ 00:04:32.054 { 00:04:32.054 "method": "fsdev_set_opts", 00:04:32.054 "params": { 00:04:32.054 "fsdev_io_pool_size": 65535, 00:04:32.054 "fsdev_io_cache_size": 256 00:04:32.054 } 00:04:32.054 } 00:04:32.054 ] 00:04:32.054 }, 00:04:32.054 { 00:04:32.054 "subsystem": "keyring", 00:04:32.054 "config": [] 00:04:32.054 }, 00:04:32.054 { 00:04:32.054 "subsystem": "iobuf", 00:04:32.054 "config": [ 00:04:32.054 { 00:04:32.054 "method": "iobuf_set_options", 00:04:32.054 "params": { 00:04:32.054 "small_pool_count": 8192, 00:04:32.054 "large_pool_count": 1024, 00:04:32.054 "small_bufsize": 8192, 00:04:32.054 "large_bufsize": 135168, 00:04:32.054 "enable_numa": false 00:04:32.054 } 00:04:32.054 } 00:04:32.054 ] 00:04:32.054 }, 00:04:32.054 { 00:04:32.054 "subsystem": "sock", 00:04:32.054 "config": [ 00:04:32.054 { 
00:04:32.054 "method": "sock_set_default_impl", 00:04:32.054 "params": { 00:04:32.054 "impl_name": "posix" 00:04:32.054 } 00:04:32.054 }, 00:04:32.054 { 00:04:32.054 "method": "sock_impl_set_options", 00:04:32.054 "params": { 00:04:32.054 "impl_name": "ssl", 00:04:32.054 "recv_buf_size": 4096, 00:04:32.054 "send_buf_size": 4096, 00:04:32.054 "enable_recv_pipe": true, 00:04:32.054 "enable_quickack": false, 00:04:32.054 "enable_placement_id": 0, 00:04:32.054 "enable_zerocopy_send_server": true, 00:04:32.054 "enable_zerocopy_send_client": false, 00:04:32.054 "zerocopy_threshold": 0, 00:04:32.054 "tls_version": 0, 00:04:32.054 "enable_ktls": false 00:04:32.054 } 00:04:32.054 }, 00:04:32.054 { 00:04:32.054 "method": "sock_impl_set_options", 00:04:32.054 "params": { 00:04:32.054 "impl_name": "posix", 00:04:32.054 "recv_buf_size": 2097152, 00:04:32.054 "send_buf_size": 2097152, 00:04:32.054 "enable_recv_pipe": true, 00:04:32.054 "enable_quickack": false, 00:04:32.054 "enable_placement_id": 0, 00:04:32.054 "enable_zerocopy_send_server": true, 00:04:32.054 "enable_zerocopy_send_client": false, 00:04:32.054 "zerocopy_threshold": 0, 00:04:32.054 "tls_version": 0, 00:04:32.054 "enable_ktls": false 00:04:32.054 } 00:04:32.054 } 00:04:32.054 ] 00:04:32.054 }, 00:04:32.054 { 00:04:32.054 "subsystem": "vmd", 00:04:32.054 "config": [] 00:04:32.054 }, 00:04:32.054 { 00:04:32.054 "subsystem": "accel", 00:04:32.054 "config": [ 00:04:32.054 { 00:04:32.054 "method": "accel_set_options", 00:04:32.054 "params": { 00:04:32.054 "small_cache_size": 128, 00:04:32.054 "large_cache_size": 16, 00:04:32.054 "task_count": 2048, 00:04:32.054 "sequence_count": 2048, 00:04:32.054 "buf_count": 2048 00:04:32.054 } 00:04:32.054 } 00:04:32.054 ] 00:04:32.054 }, 00:04:32.054 { 00:04:32.054 "subsystem": "bdev", 00:04:32.054 "config": [ 00:04:32.054 { 00:04:32.054 "method": "bdev_set_options", 00:04:32.054 "params": { 00:04:32.054 "bdev_io_pool_size": 65535, 00:04:32.054 "bdev_io_cache_size": 256, 00:04:32.054 "bdev_auto_examine": true, 00:04:32.054 "iobuf_small_cache_size": 128, 00:04:32.054 "iobuf_large_cache_size": 16 00:04:32.054 } 00:04:32.054 }, 00:04:32.054 { 00:04:32.054 "method": "bdev_raid_set_options", 00:04:32.054 "params": { 00:04:32.054 "process_window_size_kb": 1024, 00:04:32.054 "process_max_bandwidth_mb_sec": 0 00:04:32.054 } 00:04:32.054 }, 00:04:32.054 { 00:04:32.054 "method": "bdev_iscsi_set_options", 00:04:32.054 "params": { 00:04:32.054 "timeout_sec": 30 00:04:32.054 } 00:04:32.054 }, 00:04:32.054 { 00:04:32.054 "method": "bdev_nvme_set_options", 00:04:32.054 "params": { 00:04:32.054 "action_on_timeout": "none", 00:04:32.054 "timeout_us": 0, 00:04:32.054 "timeout_admin_us": 0, 00:04:32.054 "keep_alive_timeout_ms": 10000, 00:04:32.054 "arbitration_burst": 0, 00:04:32.054 "low_priority_weight": 0, 00:04:32.054 "medium_priority_weight": 0, 00:04:32.054 "high_priority_weight": 0, 00:04:32.054 "nvme_adminq_poll_period_us": 10000, 00:04:32.054 "nvme_ioq_poll_period_us": 0, 00:04:32.054 "io_queue_requests": 0, 00:04:32.054 "delay_cmd_submit": true, 00:04:32.054 "transport_retry_count": 4, 00:04:32.054 "bdev_retry_count": 3, 00:04:32.054 "transport_ack_timeout": 0, 00:04:32.054 "ctrlr_loss_timeout_sec": 0, 00:04:32.054 "reconnect_delay_sec": 0, 00:04:32.054 "fast_io_fail_timeout_sec": 0, 00:04:32.054 "disable_auto_failback": false, 00:04:32.054 "generate_uuids": false, 00:04:32.054 "transport_tos": 0, 00:04:32.054 "nvme_error_stat": false, 00:04:32.054 "rdma_srq_size": 0, 00:04:32.054 "io_path_stat": false, 
00:04:32.054 "allow_accel_sequence": false, 00:04:32.054 "rdma_max_cq_size": 0, 00:04:32.054 "rdma_cm_event_timeout_ms": 0, 00:04:32.054 "dhchap_digests": [ 00:04:32.054 "sha256", 00:04:32.054 "sha384", 00:04:32.054 "sha512" 00:04:32.054 ], 00:04:32.054 "dhchap_dhgroups": [ 00:04:32.054 "null", 00:04:32.054 "ffdhe2048", 00:04:32.054 "ffdhe3072", 00:04:32.054 "ffdhe4096", 00:04:32.054 "ffdhe6144", 00:04:32.054 "ffdhe8192" 00:04:32.054 ], 00:04:32.054 "rdma_umr_per_io": false 00:04:32.054 } 00:04:32.054 }, 00:04:32.055 { 00:04:32.055 "method": "bdev_nvme_set_hotplug", 00:04:32.055 "params": { 00:04:32.055 "period_us": 100000, 00:04:32.055 "enable": false 00:04:32.055 } 00:04:32.055 }, 00:04:32.055 { 00:04:32.055 "method": "bdev_wait_for_examine" 00:04:32.055 } 00:04:32.055 ] 00:04:32.055 }, 00:04:32.055 { 00:04:32.055 "subsystem": "scsi", 00:04:32.055 "config": null 00:04:32.055 }, 00:04:32.055 { 00:04:32.055 "subsystem": "scheduler", 00:04:32.055 "config": [ 00:04:32.055 { 00:04:32.055 "method": "framework_set_scheduler", 00:04:32.055 "params": { 00:04:32.055 "name": "static" 00:04:32.055 } 00:04:32.055 } 00:04:32.055 ] 00:04:32.055 }, 00:04:32.055 { 00:04:32.055 "subsystem": "vhost_scsi", 00:04:32.055 "config": [] 00:04:32.055 }, 00:04:32.055 { 00:04:32.055 "subsystem": "vhost_blk", 00:04:32.055 "config": [] 00:04:32.055 }, 00:04:32.055 { 00:04:32.055 "subsystem": "ublk", 00:04:32.055 "config": [] 00:04:32.055 }, 00:04:32.055 { 00:04:32.055 "subsystem": "nbd", 00:04:32.055 "config": [] 00:04:32.055 }, 00:04:32.055 { 00:04:32.055 "subsystem": "nvmf", 00:04:32.055 "config": [ 00:04:32.055 { 00:04:32.055 "method": "nvmf_set_config", 00:04:32.055 "params": { 00:04:32.055 "discovery_filter": "match_any", 00:04:32.055 "admin_cmd_passthru": { 00:04:32.055 "identify_ctrlr": false 00:04:32.055 }, 00:04:32.055 "dhchap_digests": [ 00:04:32.055 "sha256", 00:04:32.055 "sha384", 00:04:32.055 "sha512" 00:04:32.055 ], 00:04:32.055 "dhchap_dhgroups": [ 00:04:32.055 "null", 00:04:32.055 "ffdhe2048", 00:04:32.055 "ffdhe3072", 00:04:32.055 "ffdhe4096", 00:04:32.055 "ffdhe6144", 00:04:32.055 "ffdhe8192" 00:04:32.055 ] 00:04:32.055 } 00:04:32.055 }, 00:04:32.055 { 00:04:32.055 "method": "nvmf_set_max_subsystems", 00:04:32.055 "params": { 00:04:32.055 "max_subsystems": 1024 00:04:32.055 } 00:04:32.055 }, 00:04:32.055 { 00:04:32.055 "method": "nvmf_set_crdt", 00:04:32.055 "params": { 00:04:32.055 "crdt1": 0, 00:04:32.055 "crdt2": 0, 00:04:32.055 "crdt3": 0 00:04:32.055 } 00:04:32.055 }, 00:04:32.055 { 00:04:32.055 "method": "nvmf_create_transport", 00:04:32.055 "params": { 00:04:32.055 "trtype": "TCP", 00:04:32.055 "max_queue_depth": 128, 00:04:32.055 "max_io_qpairs_per_ctrlr": 127, 00:04:32.055 "in_capsule_data_size": 4096, 00:04:32.055 "max_io_size": 131072, 00:04:32.055 "io_unit_size": 131072, 00:04:32.055 "max_aq_depth": 128, 00:04:32.055 "num_shared_buffers": 511, 00:04:32.055 "buf_cache_size": 4294967295, 00:04:32.055 "dif_insert_or_strip": false, 00:04:32.055 "zcopy": false, 00:04:32.055 "c2h_success": true, 00:04:32.055 "sock_priority": 0, 00:04:32.055 "abort_timeout_sec": 1, 00:04:32.055 "ack_timeout": 0, 00:04:32.055 "data_wr_pool_size": 0 00:04:32.055 } 00:04:32.055 } 00:04:32.055 ] 00:04:32.055 }, 00:04:32.055 { 00:04:32.055 "subsystem": "iscsi", 00:04:32.055 "config": [ 00:04:32.055 { 00:04:32.055 "method": "iscsi_set_options", 00:04:32.055 "params": { 00:04:32.055 "node_base": "iqn.2016-06.io.spdk", 00:04:32.055 "max_sessions": 128, 00:04:32.055 "max_connections_per_session": 2, 00:04:32.055 
"max_queue_depth": 64, 00:04:32.055 "default_time2wait": 2, 00:04:32.055 "default_time2retain": 20, 00:04:32.055 "first_burst_length": 8192, 00:04:32.055 "immediate_data": true, 00:04:32.055 "allow_duplicated_isid": false, 00:04:32.055 "error_recovery_level": 0, 00:04:32.055 "nop_timeout": 60, 00:04:32.055 "nop_in_interval": 30, 00:04:32.055 "disable_chap": false, 00:04:32.055 "require_chap": false, 00:04:32.055 "mutual_chap": false, 00:04:32.055 "chap_group": 0, 00:04:32.055 "max_large_datain_per_connection": 64, 00:04:32.055 "max_r2t_per_connection": 4, 00:04:32.055 "pdu_pool_size": 36864, 00:04:32.055 "immediate_data_pool_size": 16384, 00:04:32.055 "data_out_pool_size": 2048 00:04:32.055 } 00:04:32.055 } 00:04:32.055 ] 00:04:32.055 } 00:04:32.055 ] 00:04:32.055 } 00:04:32.055 13:45:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:32.055 13:45:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59242 00:04:32.055 13:45:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59242 ']' 00:04:32.055 13:45:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59242 00:04:32.313 13:45:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:32.313 13:45:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.313 13:45:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59242 00:04:32.313 killing process with pid 59242 00:04:32.313 13:45:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.313 13:45:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.313 13:45:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59242' 00:04:32.313 13:45:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59242 00:04:32.313 13:45:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59242 00:04:34.866 13:45:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59298 00:04:34.866 13:45:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:34.866 13:45:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:40.139 13:45:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59298 00:04:40.139 13:45:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59298 ']' 00:04:40.139 13:45:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59298 00:04:40.139 13:45:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:40.139 13:45:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.139 13:45:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59298 00:04:40.139 killing process with pid 59298 00:04:40.139 13:45:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.139 13:45:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.139 13:45:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59298' 00:04:40.139 13:45:32 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@973 -- # kill 59298 00:04:40.139 13:45:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59298 00:04:42.041 13:45:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:42.041 13:45:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:42.041 ************************************ 00:04:42.041 END TEST skip_rpc_with_json 00:04:42.041 ************************************ 00:04:42.041 00:04:42.041 real 0m11.283s 00:04:42.041 user 0m10.735s 00:04:42.041 sys 0m0.869s 00:04:42.041 13:45:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.041 13:45:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.041 13:45:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:42.041 13:45:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.041 13:45:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.041 13:45:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.041 ************************************ 00:04:42.041 START TEST skip_rpc_with_delay 00:04:42.041 ************************************ 00:04:42.041 13:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:42.041 13:45:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.041 13:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:42.041 13:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.041 13:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.041 13:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.041 13:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.041 13:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.041 13:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.041 13:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.041 13:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.041 13:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:42.041 13:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.300 [2024-12-11 13:45:35.103106] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
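That *ERROR* line is the entire assertion of skip_rpc_with_delay: --wait-for-rpc asks the app to pause initialization until an RPC arrives, which is meaningless when --no-rpc-server suppresses the RPC server, so spdk_app_start must refuse to come up. Reduced to its core (NOT inverts an exit status, per common/autotest_common.sh):

  # must exit non-zero; the harness then normalizes the failure to es=1 below
  NOT build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
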
00:04:42.300 13:45:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:42.300 13:45:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:42.300 13:45:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:42.300 13:45:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:42.300 00:04:42.300 real 0m0.186s 00:04:42.300 user 0m0.089s 00:04:42.300 sys 0m0.095s 00:04:42.300 13:45:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.300 ************************************ 00:04:42.300 END TEST skip_rpc_with_delay 00:04:42.300 ************************************ 00:04:42.300 13:45:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:42.300 13:45:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:42.300 13:45:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:42.300 13:45:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:42.300 13:45:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.300 13:45:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.300 13:45:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.300 ************************************ 00:04:42.300 START TEST exit_on_failed_rpc_init 00:04:42.300 ************************************ 00:04:42.300 13:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:42.300 13:45:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59426 00:04:42.300 13:45:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.300 13:45:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59426 00:04:42.300 13:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59426 ']' 00:04:42.300 13:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.300 13:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.300 13:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.300 13:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.300 13:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.559 [2024-12-11 13:45:35.399705] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
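Before injecting the failure, exit_on_failed_rpc_init brings up a healthy first target (pid 59426 here) using the same three-step handshake every suite in this file relies on. Its shape, with helpers from common/autotest_common.sh:

  trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT   # cleanup on any failure path
  build/bin/spdk_tgt -m 0x1 & spdk_pid=$!
  waitforlisten $spdk_pid      # poll until /var/tmp/spdk.sock accepts RPCs
  # ... test body ...
  trap - SIGINT SIGTERM EXIT   # disarm on the success path
  killprocess $spdk_pid
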
00:04:42.559 [2024-12-11 13:45:35.400165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59426 ] 00:04:42.559 [2024-12-11 13:45:35.587211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.818 [2024-12-11 13:45:35.703203] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.755 13:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.755 13:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:43.755 13:45:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.755 13:45:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.755 13:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:43.755 13:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.755 13:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.755 13:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.755 13:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.755 13:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.755 13:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.755 13:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.755 13:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.755 13:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:43.755 13:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.755 [2024-12-11 13:45:36.700492] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:04:43.755 [2024-12-11 13:45:36.700761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59444 ] 00:04:44.014 [2024-12-11 13:45:36.883459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.014 [2024-12-11 13:45:37.001114] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.014 [2024-12-11 13:45:37.001213] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
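The "socket in use" error above, together with the "Unable to start RPC service" follow-up that comes next, is the expected outcome: the first target still owns /var/tmp/spdk.sock, so the second instance (core mask 0x2, file prefix spdk_pid59444) cannot bind its RPC listener and the app stops itself during init. The test asserts exactly that the second launch exits non-zero:

  # a second target on the same default RPC socket must fail to initialize
  NOT build/bin/spdk_tgt -m 0x2
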
00:04:44.014 [2024-12-11 13:45:37.001230] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:44.014 [2024-12-11 13:45:37.001249] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:44.272 13:45:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:44.272 13:45:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:44.272 13:45:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:44.272 13:45:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:44.272 13:45:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:44.272 13:45:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:44.272 13:45:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:44.272 13:45:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59426 00:04:44.272 13:45:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59426 ']' 00:04:44.272 13:45:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59426 00:04:44.272 13:45:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:44.272 13:45:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.273 13:45:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59426 00:04:44.273 13:45:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.273 13:45:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.530 13:45:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59426' 00:04:44.530 killing process with pid 59426 00:04:44.530 13:45:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59426 00:04:44.530 13:45:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59426 00:04:47.064 00:04:47.064 real 0m4.467s 00:04:47.064 user 0m4.813s 00:04:47.064 sys 0m0.662s 00:04:47.064 13:45:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.064 ************************************ 00:04:47.064 END TEST exit_on_failed_rpc_init 00:04:47.064 ************************************ 00:04:47.064 13:45:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:47.064 13:45:39 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.064 00:04:47.064 real 0m23.872s 00:04:47.064 user 0m22.789s 00:04:47.064 sys 0m2.343s 00:04:47.064 ************************************ 00:04:47.064 END TEST skip_rpc 00:04:47.064 ************************************ 00:04:47.064 13:45:39 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.064 13:45:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.064 13:45:39 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:47.064 13:45:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.064 13:45:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.064 13:45:39 -- common/autotest_common.sh@10 -- # set +x 00:04:47.064 
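With the skip_rpc suite finished (totals just above), autotest.sh moves on to rpc_client. All of these suites are driven through the run_test wrapper from common/autotest_common.sh, which is what prints the START/END banners and the real/user/sys timings seen throughout this log. A sketch of its observable behavior only, not the helper's actual source:

  run_test() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@" || return 1     # per-suite timings like the block above
      echo "END TEST $name"
  }
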
************************************ 00:04:47.064 START TEST rpc_client 00:04:47.064 ************************************ 00:04:47.064 13:45:39 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:47.064 * Looking for test storage... 00:04:47.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:47.064 13:45:39 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.064 13:45:39 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.064 13:45:39 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.064 13:45:40 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.064 13:45:40 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:47.064 13:45:40 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.064 13:45:40 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.064 --rc genhtml_branch_coverage=1 00:04:47.064 --rc genhtml_function_coverage=1 00:04:47.064 --rc genhtml_legend=1 00:04:47.064 --rc geninfo_all_blocks=1 00:04:47.064 --rc geninfo_unexecuted_blocks=1 00:04:47.064 00:04:47.064 ' 00:04:47.064 13:45:40 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.064 --rc genhtml_branch_coverage=1 00:04:47.064 --rc genhtml_function_coverage=1 00:04:47.064 --rc genhtml_legend=1 00:04:47.064 --rc geninfo_all_blocks=1 00:04:47.064 --rc geninfo_unexecuted_blocks=1 00:04:47.064 00:04:47.064 ' 00:04:47.064 13:45:40 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.064 --rc genhtml_branch_coverage=1 00:04:47.064 --rc genhtml_function_coverage=1 00:04:47.064 --rc genhtml_legend=1 00:04:47.064 --rc geninfo_all_blocks=1 00:04:47.064 --rc geninfo_unexecuted_blocks=1 00:04:47.064 00:04:47.064 ' 00:04:47.064 13:45:40 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.064 --rc genhtml_branch_coverage=1 00:04:47.064 --rc genhtml_function_coverage=1 00:04:47.064 --rc genhtml_legend=1 00:04:47.064 --rc geninfo_all_blocks=1 00:04:47.064 --rc geninfo_unexecuted_blocks=1 00:04:47.064 00:04:47.064 ' 00:04:47.064 13:45:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:47.323 OK 00:04:47.323 13:45:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:47.323 00:04:47.323 real 0m0.310s 00:04:47.323 user 0m0.174s 00:04:47.323 sys 0m0.153s 00:04:47.323 13:45:40 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.323 13:45:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:47.323 ************************************ 00:04:47.323 END TEST rpc_client 00:04:47.323 ************************************ 00:04:47.323 13:45:40 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:47.323 13:45:40 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.323 13:45:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.323 13:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:47.323 ************************************ 00:04:47.323 START TEST json_config 00:04:47.323 ************************************ 00:04:47.323 13:45:40 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:47.323 13:45:40 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.323 13:45:40 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.323 13:45:40 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.582 13:45:40 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.582 13:45:40 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.582 13:45:40 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.582 13:45:40 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.582 13:45:40 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.582 13:45:40 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.582 13:45:40 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.582 13:45:40 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.582 13:45:40 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.582 13:45:40 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.582 13:45:40 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.582 13:45:40 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.582 13:45:40 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:47.582 13:45:40 json_config -- scripts/common.sh@345 -- # : 1 00:04:47.582 13:45:40 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.582 13:45:40 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.582 13:45:40 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:47.582 13:45:40 json_config -- scripts/common.sh@353 -- # local d=1 00:04:47.582 13:45:40 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.582 13:45:40 json_config -- scripts/common.sh@355 -- # echo 1 00:04:47.582 13:45:40 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.582 13:45:40 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:47.582 13:45:40 json_config -- scripts/common.sh@353 -- # local d=2 00:04:47.582 13:45:40 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.582 13:45:40 json_config -- scripts/common.sh@355 -- # echo 2 00:04:47.582 13:45:40 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.582 13:45:40 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.582 13:45:40 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.582 13:45:40 json_config -- scripts/common.sh@368 -- # return 0 00:04:47.582 13:45:40 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.582 13:45:40 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.582 --rc genhtml_branch_coverage=1 00:04:47.582 --rc genhtml_function_coverage=1 00:04:47.582 --rc genhtml_legend=1 00:04:47.582 --rc geninfo_all_blocks=1 00:04:47.582 --rc geninfo_unexecuted_blocks=1 00:04:47.582 00:04:47.582 ' 00:04:47.582 13:45:40 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.582 --rc genhtml_branch_coverage=1 00:04:47.582 --rc genhtml_function_coverage=1 00:04:47.582 --rc genhtml_legend=1 00:04:47.582 --rc geninfo_all_blocks=1 00:04:47.582 --rc geninfo_unexecuted_blocks=1 00:04:47.582 00:04:47.582 ' 00:04:47.582 13:45:40 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.582 --rc genhtml_branch_coverage=1 00:04:47.582 --rc genhtml_function_coverage=1 00:04:47.582 --rc genhtml_legend=1 00:04:47.582 --rc geninfo_all_blocks=1 00:04:47.582 --rc geninfo_unexecuted_blocks=1 00:04:47.582 00:04:47.582 ' 00:04:47.582 13:45:40 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.582 --rc genhtml_branch_coverage=1 00:04:47.582 --rc genhtml_function_coverage=1 00:04:47.582 --rc genhtml_legend=1 00:04:47.582 --rc geninfo_all_blocks=1 00:04:47.582 --rc geninfo_unexecuted_blocks=1 00:04:47.582 00:04:47.582 ' 00:04:47.582 13:45:40 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.582 13:45:40 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63a1cab5-85ed-4611-9076-2b12eeaf9a9e 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=63a1cab5-85ed-4611-9076-2b12eeaf9a9e 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:47.582 13:45:40 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.582 13:45:40 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.582 13:45:40 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.582 13:45:40 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.582 13:45:40 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.582 13:45:40 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.582 13:45:40 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.582 13:45:40 json_config -- paths/export.sh@5 -- # export PATH 00:04:47.582 13:45:40 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@51 -- # : 0 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.582 13:45:40 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.582 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.582 13:45:40 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.582 13:45:40 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:47.582 13:45:40 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:47.582 13:45:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:47.582 13:45:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:47.582 13:45:40 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:47.582 13:45:40 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:47.582 WARNING: No tests are enabled so not running JSON configuration tests 00:04:47.582 13:45:40 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:47.582 00:04:47.582 real 0m0.226s 00:04:47.582 user 0m0.134s 00:04:47.582 sys 0m0.093s 00:04:47.582 13:45:40 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.582 13:45:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.583 ************************************ 00:04:47.583 END TEST json_config 00:04:47.583 ************************************ 00:04:47.583 13:45:40 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:47.583 13:45:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.583 13:45:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.583 13:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:47.583 ************************************ 00:04:47.583 START TEST json_config_extra_key 00:04:47.583 ************************************ 00:04:47.583 13:45:40 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:47.583 13:45:40 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.583 13:45:40 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.583 13:45:40 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.842 13:45:40 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.842 13:45:40 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.842 13:45:40 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:47.842 13:45:40 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.842 13:45:40 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.842 --rc genhtml_branch_coverage=1 00:04:47.842 --rc genhtml_function_coverage=1 00:04:47.842 --rc genhtml_legend=1 00:04:47.842 --rc geninfo_all_blocks=1 00:04:47.842 --rc geninfo_unexecuted_blocks=1 00:04:47.842 00:04:47.842 ' 00:04:47.842 13:45:40 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.842 --rc genhtml_branch_coverage=1 00:04:47.842 --rc genhtml_function_coverage=1 00:04:47.842 --rc genhtml_legend=1 00:04:47.843 --rc geninfo_all_blocks=1 00:04:47.843 --rc geninfo_unexecuted_blocks=1 00:04:47.843 00:04:47.843 ' 00:04:47.843 13:45:40 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.843 --rc genhtml_branch_coverage=1 00:04:47.843 --rc genhtml_function_coverage=1 00:04:47.843 --rc genhtml_legend=1 00:04:47.843 --rc geninfo_all_blocks=1 00:04:47.843 --rc geninfo_unexecuted_blocks=1 00:04:47.843 00:04:47.843 ' 00:04:47.843 13:45:40 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.843 --rc genhtml_branch_coverage=1 00:04:47.843 --rc 
genhtml_function_coverage=1 00:04:47.843 --rc genhtml_legend=1 00:04:47.843 --rc geninfo_all_blocks=1 00:04:47.843 --rc geninfo_unexecuted_blocks=1 00:04:47.843 00:04:47.843 ' 00:04:47.843 13:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63a1cab5-85ed-4611-9076-2b12eeaf9a9e 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=63a1cab5-85ed-4611-9076-2b12eeaf9a9e 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:47.843 13:45:40 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.843 13:45:40 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.843 13:45:40 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.843 13:45:40 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.843 13:45:40 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.843 13:45:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.843 13:45:40 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.843 13:45:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:47.843 13:45:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.843 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.843 13:45:40 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.843 13:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:47.843 13:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:47.843 13:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:47.843 13:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:47.843 13:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:47.843 13:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:47.843 13:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:47.843 13:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:47.843 13:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:47.843 13:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:47.843 13:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:47.843 INFO: launching applications... 
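The trace above shows json_config/common.sh keeping one entry per application in bash associative arrays (app_pid, app_socket, app_params, configs_path), all keyed by the app name, here only 'target'. A condensed sketch of the launch path that follows, using the paths from this run; the rpc.py probe stands in for waitforlisten, whose internals are not shown in this log:

declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')
declare -A app_pid

app=target
# Word-splitting of ${app_params[$app]} is intentional: it expands into the
# separate -m/-s arguments seen on the spdk_tgt command line traced below.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
    -r "${app_socket[$app]}" \
    --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
app_pid[$app]=$!

# Assumed equivalent of waitforlisten: any RPC that succeeds proves the
# target is up and listening on its UNIX-domain socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do
    "$rpc" -s "${app_socket[$app]}" -t 1 rpc_get_methods &> /dev/null && break
    sleep 0.1
done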
00:04:47.843 13:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:47.843 13:45:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:47.843 13:45:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:47.843 13:45:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:47.843 13:45:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:47.843 13:45:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:47.843 13:45:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.843 13:45:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.843 13:45:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59654 00:04:47.843 13:45:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:47.843 Waiting for target to run... 00:04:47.843 13:45:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59654 /var/tmp/spdk_tgt.sock 00:04:47.843 13:45:40 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59654 ']' 00:04:47.843 13:45:40 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:47.843 13:45:40 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:47.843 13:45:40 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.843 13:45:40 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:47.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:47.843 13:45:40 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.843 13:45:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:48.102 [2024-12-11 13:45:40.901552] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:04:48.102 [2024-12-11 13:45:40.901746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59654 ] 00:04:48.363 [2024-12-11 13:45:41.334246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.647 [2024-12-11 13:45:41.434058] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.246 00:04:49.246 INFO: shutting down applications... 00:04:49.246 13:45:42 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.246 13:45:42 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:49.246 13:45:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:49.246 13:45:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
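The shutdown that follows reduces to: send SIGINT to the target, then poll the pid for up to 30 half-second intervals (the i < 30 loop with kill -0 and sleep 0.5 traced below). A condensed sketch:

pid=59654                      # app_pid["target"] from the launch above
kill -SIGINT "$pid"
for ((i = 0; i < 30; i++)); do
    # kill -0 delivers no signal; it only tests whether the pid still exists.
    kill -0 "$pid" 2> /dev/null || break
    sleep 0.5
done
echo 'SPDK target shutdown done'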
00:04:49.246 13:45:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:49.246 13:45:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:49.246 13:45:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:49.246 13:45:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59654 ]] 00:04:49.246 13:45:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59654 00:04:49.246 13:45:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:49.246 13:45:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.246 13:45:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59654 00:04:49.246 13:45:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.813 13:45:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.813 13:45:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.813 13:45:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59654 00:04:49.813 13:45:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.381 13:45:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.381 13:45:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.381 13:45:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59654 00:04:50.381 13:45:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.950 13:45:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.950 13:45:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.950 13:45:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59654 00:04:50.950 13:45:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.209 13:45:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.209 13:45:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.209 13:45:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59654 00:04:51.209 13:45:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.777 13:45:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.777 13:45:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.777 13:45:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59654 00:04:51.777 13:45:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.344 13:45:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.344 13:45:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.344 13:45:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59654 00:04:52.344 13:45:45 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:52.344 13:45:45 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:52.344 13:45:45 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:52.344 SPDK target shutdown done 00:04:52.344 13:45:45 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:52.344 Success 00:04:52.344 13:45:45 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:52.344 ************************************ 00:04:52.344 END TEST json_config_extra_key 00:04:52.344 
************************************ 00:04:52.344 00:04:52.344 real 0m4.684s 00:04:52.344 user 0m4.112s 00:04:52.344 sys 0m0.682s 00:04:52.344 13:45:45 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.344 13:45:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:52.344 13:45:45 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.344 13:45:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.344 13:45:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.344 13:45:45 -- common/autotest_common.sh@10 -- # set +x 00:04:52.344 ************************************ 00:04:52.344 START TEST alias_rpc 00:04:52.344 ************************************ 00:04:52.344 13:45:45 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.603 * Looking for test storage... 00:04:52.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:52.603 13:45:45 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:52.603 13:45:45 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:52.603 13:45:45 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:52.603 13:45:45 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.603 13:45:45 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:52.603 13:45:45 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.603 13:45:45 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:52.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.603 --rc genhtml_branch_coverage=1 00:04:52.603 --rc genhtml_function_coverage=1 00:04:52.603 --rc genhtml_legend=1 00:04:52.603 --rc geninfo_all_blocks=1 00:04:52.603 --rc geninfo_unexecuted_blocks=1 00:04:52.603 00:04:52.603 ' 00:04:52.603 13:45:45 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:52.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.603 --rc genhtml_branch_coverage=1 00:04:52.603 --rc genhtml_function_coverage=1 00:04:52.603 --rc genhtml_legend=1 00:04:52.603 --rc geninfo_all_blocks=1 00:04:52.603 --rc geninfo_unexecuted_blocks=1 00:04:52.603 00:04:52.603 ' 00:04:52.603 13:45:45 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:52.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.603 --rc genhtml_branch_coverage=1 00:04:52.603 --rc genhtml_function_coverage=1 00:04:52.603 --rc genhtml_legend=1 00:04:52.603 --rc geninfo_all_blocks=1 00:04:52.603 --rc geninfo_unexecuted_blocks=1 00:04:52.603 00:04:52.603 ' 00:04:52.603 13:45:45 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:52.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.603 --rc genhtml_branch_coverage=1 00:04:52.603 --rc genhtml_function_coverage=1 00:04:52.603 --rc genhtml_legend=1 00:04:52.603 --rc geninfo_all_blocks=1 00:04:52.603 --rc geninfo_unexecuted_blocks=1 00:04:52.603 00:04:52.603 ' 00:04:52.603 13:45:45 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:52.603 13:45:45 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59771 00:04:52.603 13:45:45 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59771 00:04:52.603 13:45:45 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.603 13:45:45 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59771 ']' 00:04:52.603 13:45:45 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.603 13:45:45 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.603 13:45:45 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:52.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.603 13:45:45 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.603 13:45:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.603 [2024-12-11 13:45:45.619549] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:04:52.603 [2024-12-11 13:45:45.620073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59771 ] 00:04:52.862 [2024-12-11 13:45:45.803263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.122 [2024-12-11 13:45:45.913511] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.058 13:45:46 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.058 13:45:46 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:54.058 13:45:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:54.058 13:45:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59771 00:04:54.058 13:45:46 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59771 ']' 00:04:54.058 13:45:46 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59771 00:04:54.058 13:45:46 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:54.058 13:45:47 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.058 13:45:47 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59771 00:04:54.058 killing process with pid 59771 00:04:54.058 13:45:47 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.058 13:45:47 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.058 13:45:47 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59771' 00:04:54.058 13:45:47 alias_rpc -- common/autotest_common.sh@973 -- # kill 59771 00:04:54.059 13:45:47 alias_rpc -- common/autotest_common.sh@978 -- # wait 59771 00:04:56.593 ************************************ 00:04:56.593 END TEST alias_rpc 00:04:56.593 ************************************ 00:04:56.593 00:04:56.593 real 0m4.161s 00:04:56.593 user 0m4.094s 00:04:56.593 sys 0m0.603s 00:04:56.593 13:45:49 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.593 13:45:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.593 13:45:49 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:56.593 13:45:49 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:56.593 13:45:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.593 13:45:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.593 13:45:49 -- common/autotest_common.sh@10 -- # set +x 00:04:56.593 ************************************ 00:04:56.593 START TEST spdkcli_tcp 00:04:56.593 ************************************ 00:04:56.593 13:45:49 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:56.593 * Looking for test storage... 
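killprocess, exercised above for pid 59771, checks that the pid is still alive and inspects its command name before signalling it. A sketch of the steps visible in the trace; the real helper in autotest_common.sh carries more error handling, in particular for processes wrapped in sudo:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                  # trace: '[' -z 59771 ']'
    kill -0 "$pid" || return 1                 # still running?
    local process_name=
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
    fi
    echo "killing process with pid $pid"
    # The comparison against sudo is in the trace; simply skipping the kill
    # in that case is an assumption, the real helper handles it differently.
    [ "$process_name" = sudo ] || kill "$pid"
    wait "$pid" || true                        # works because pid is our child
}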
00:04:56.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:56.852 13:45:49 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:56.852 13:45:49 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:56.852 13:45:49 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:56.852 13:45:49 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.853 13:45:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:56.853 13:45:49 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.853 13:45:49 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:56.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.853 --rc genhtml_branch_coverage=1 00:04:56.853 --rc genhtml_function_coverage=1 00:04:56.853 --rc genhtml_legend=1 00:04:56.853 --rc geninfo_all_blocks=1 00:04:56.853 --rc geninfo_unexecuted_blocks=1 00:04:56.853 00:04:56.853 ' 00:04:56.853 13:45:49 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:56.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.853 --rc genhtml_branch_coverage=1 00:04:56.853 --rc genhtml_function_coverage=1 00:04:56.853 --rc genhtml_legend=1 00:04:56.853 --rc geninfo_all_blocks=1 00:04:56.853 --rc geninfo_unexecuted_blocks=1 00:04:56.853 
00:04:56.853 ' 00:04:56.853 13:45:49 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:56.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.853 --rc genhtml_branch_coverage=1 00:04:56.853 --rc genhtml_function_coverage=1 00:04:56.853 --rc genhtml_legend=1 00:04:56.853 --rc geninfo_all_blocks=1 00:04:56.853 --rc geninfo_unexecuted_blocks=1 00:04:56.853 00:04:56.853 ' 00:04:56.853 13:45:49 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:56.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.853 --rc genhtml_branch_coverage=1 00:04:56.853 --rc genhtml_function_coverage=1 00:04:56.853 --rc genhtml_legend=1 00:04:56.853 --rc geninfo_all_blocks=1 00:04:56.853 --rc geninfo_unexecuted_blocks=1 00:04:56.853 00:04:56.853 ' 00:04:56.853 13:45:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:56.853 13:45:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:56.853 13:45:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:56.853 13:45:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:56.853 13:45:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:56.853 13:45:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:56.853 13:45:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:56.853 13:45:49 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.853 13:45:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.853 13:45:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59878 00:04:56.853 13:45:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59878 00:04:56.853 13:45:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:56.853 13:45:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59878 ']' 00:04:56.853 13:45:49 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.853 13:45:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.853 13:45:49 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.853 13:45:49 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.853 13:45:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.853 [2024-12-11 13:45:49.862478] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
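Unlike the single-core targets earlier in this log, spdkcli_tcp starts spdk_tgt with -m 0x3 -p 0: cpumask 0x3 has bits 0 and 1 set, so two reactors come up (matching the two "Reactor started" notices below), and -p 0 pins the main reactor to core 0:

# -m <mask>: hex cpumask of cores to run reactors on; -p <core>: main core.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &
spdk_tgt_pid=$!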
00:04:56.853 [2024-12-11 13:45:49.862596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59878 ] 00:04:57.112 [2024-12-11 13:45:50.043411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.371 [2024-12-11 13:45:50.162359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.371 [2024-12-11 13:45:50.162393] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.310 13:45:51 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.310 13:45:51 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:58.310 13:45:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59895 00:04:58.310 13:45:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:58.310 13:45:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:58.310 [ 00:04:58.310 "bdev_malloc_delete", 00:04:58.310 "bdev_malloc_create", 00:04:58.310 "bdev_null_resize", 00:04:58.310 "bdev_null_delete", 00:04:58.310 "bdev_null_create", 00:04:58.310 "bdev_nvme_cuse_unregister", 00:04:58.310 "bdev_nvme_cuse_register", 00:04:58.310 "bdev_opal_new_user", 00:04:58.310 "bdev_opal_set_lock_state", 00:04:58.310 "bdev_opal_delete", 00:04:58.310 "bdev_opal_get_info", 00:04:58.310 "bdev_opal_create", 00:04:58.310 "bdev_nvme_opal_revert", 00:04:58.310 "bdev_nvme_opal_init", 00:04:58.310 "bdev_nvme_send_cmd", 00:04:58.310 "bdev_nvme_set_keys", 00:04:58.310 "bdev_nvme_get_path_iostat", 00:04:58.310 "bdev_nvme_get_mdns_discovery_info", 00:04:58.310 "bdev_nvme_stop_mdns_discovery", 00:04:58.310 "bdev_nvme_start_mdns_discovery", 00:04:58.310 "bdev_nvme_set_multipath_policy", 00:04:58.310 "bdev_nvme_set_preferred_path", 00:04:58.310 "bdev_nvme_get_io_paths", 00:04:58.310 "bdev_nvme_remove_error_injection", 00:04:58.310 "bdev_nvme_add_error_injection", 00:04:58.310 "bdev_nvme_get_discovery_info", 00:04:58.310 "bdev_nvme_stop_discovery", 00:04:58.310 "bdev_nvme_start_discovery", 00:04:58.310 "bdev_nvme_get_controller_health_info", 00:04:58.310 "bdev_nvme_disable_controller", 00:04:58.310 "bdev_nvme_enable_controller", 00:04:58.310 "bdev_nvme_reset_controller", 00:04:58.310 "bdev_nvme_get_transport_statistics", 00:04:58.310 "bdev_nvme_apply_firmware", 00:04:58.310 "bdev_nvme_detach_controller", 00:04:58.310 "bdev_nvme_get_controllers", 00:04:58.310 "bdev_nvme_attach_controller", 00:04:58.310 "bdev_nvme_set_hotplug", 00:04:58.310 "bdev_nvme_set_options", 00:04:58.310 "bdev_passthru_delete", 00:04:58.310 "bdev_passthru_create", 00:04:58.310 "bdev_lvol_set_parent_bdev", 00:04:58.310 "bdev_lvol_set_parent", 00:04:58.310 "bdev_lvol_check_shallow_copy", 00:04:58.310 "bdev_lvol_start_shallow_copy", 00:04:58.310 "bdev_lvol_grow_lvstore", 00:04:58.310 "bdev_lvol_get_lvols", 00:04:58.310 "bdev_lvol_get_lvstores", 00:04:58.310 "bdev_lvol_delete", 00:04:58.310 "bdev_lvol_set_read_only", 00:04:58.310 "bdev_lvol_resize", 00:04:58.310 "bdev_lvol_decouple_parent", 00:04:58.310 "bdev_lvol_inflate", 00:04:58.310 "bdev_lvol_rename", 00:04:58.310 "bdev_lvol_clone_bdev", 00:04:58.310 "bdev_lvol_clone", 00:04:58.310 "bdev_lvol_snapshot", 00:04:58.310 "bdev_lvol_create", 00:04:58.310 "bdev_lvol_delete_lvstore", 00:04:58.310 "bdev_lvol_rename_lvstore", 00:04:58.310 
"bdev_lvol_create_lvstore", 00:04:58.310 "bdev_raid_set_options", 00:04:58.310 "bdev_raid_remove_base_bdev", 00:04:58.310 "bdev_raid_add_base_bdev", 00:04:58.310 "bdev_raid_delete", 00:04:58.310 "bdev_raid_create", 00:04:58.310 "bdev_raid_get_bdevs", 00:04:58.310 "bdev_error_inject_error", 00:04:58.310 "bdev_error_delete", 00:04:58.310 "bdev_error_create", 00:04:58.310 "bdev_split_delete", 00:04:58.310 "bdev_split_create", 00:04:58.310 "bdev_delay_delete", 00:04:58.310 "bdev_delay_create", 00:04:58.310 "bdev_delay_update_latency", 00:04:58.310 "bdev_zone_block_delete", 00:04:58.310 "bdev_zone_block_create", 00:04:58.310 "blobfs_create", 00:04:58.310 "blobfs_detect", 00:04:58.310 "blobfs_set_cache_size", 00:04:58.310 "bdev_xnvme_delete", 00:04:58.310 "bdev_xnvme_create", 00:04:58.310 "bdev_aio_delete", 00:04:58.310 "bdev_aio_rescan", 00:04:58.310 "bdev_aio_create", 00:04:58.310 "bdev_ftl_set_property", 00:04:58.310 "bdev_ftl_get_properties", 00:04:58.310 "bdev_ftl_get_stats", 00:04:58.310 "bdev_ftl_unmap", 00:04:58.310 "bdev_ftl_unload", 00:04:58.310 "bdev_ftl_delete", 00:04:58.310 "bdev_ftl_load", 00:04:58.310 "bdev_ftl_create", 00:04:58.310 "bdev_virtio_attach_controller", 00:04:58.310 "bdev_virtio_scsi_get_devices", 00:04:58.310 "bdev_virtio_detach_controller", 00:04:58.310 "bdev_virtio_blk_set_hotplug", 00:04:58.310 "bdev_iscsi_delete", 00:04:58.310 "bdev_iscsi_create", 00:04:58.310 "bdev_iscsi_set_options", 00:04:58.310 "accel_error_inject_error", 00:04:58.310 "ioat_scan_accel_module", 00:04:58.310 "dsa_scan_accel_module", 00:04:58.310 "iaa_scan_accel_module", 00:04:58.310 "keyring_file_remove_key", 00:04:58.310 "keyring_file_add_key", 00:04:58.310 "keyring_linux_set_options", 00:04:58.310 "fsdev_aio_delete", 00:04:58.310 "fsdev_aio_create", 00:04:58.310 "iscsi_get_histogram", 00:04:58.310 "iscsi_enable_histogram", 00:04:58.310 "iscsi_set_options", 00:04:58.310 "iscsi_get_auth_groups", 00:04:58.310 "iscsi_auth_group_remove_secret", 00:04:58.310 "iscsi_auth_group_add_secret", 00:04:58.310 "iscsi_delete_auth_group", 00:04:58.310 "iscsi_create_auth_group", 00:04:58.310 "iscsi_set_discovery_auth", 00:04:58.310 "iscsi_get_options", 00:04:58.310 "iscsi_target_node_request_logout", 00:04:58.310 "iscsi_target_node_set_redirect", 00:04:58.310 "iscsi_target_node_set_auth", 00:04:58.310 "iscsi_target_node_add_lun", 00:04:58.310 "iscsi_get_stats", 00:04:58.310 "iscsi_get_connections", 00:04:58.310 "iscsi_portal_group_set_auth", 00:04:58.310 "iscsi_start_portal_group", 00:04:58.310 "iscsi_delete_portal_group", 00:04:58.310 "iscsi_create_portal_group", 00:04:58.310 "iscsi_get_portal_groups", 00:04:58.310 "iscsi_delete_target_node", 00:04:58.310 "iscsi_target_node_remove_pg_ig_maps", 00:04:58.310 "iscsi_target_node_add_pg_ig_maps", 00:04:58.310 "iscsi_create_target_node", 00:04:58.310 "iscsi_get_target_nodes", 00:04:58.310 "iscsi_delete_initiator_group", 00:04:58.310 "iscsi_initiator_group_remove_initiators", 00:04:58.310 "iscsi_initiator_group_add_initiators", 00:04:58.310 "iscsi_create_initiator_group", 00:04:58.310 "iscsi_get_initiator_groups", 00:04:58.310 "nvmf_set_crdt", 00:04:58.310 "nvmf_set_config", 00:04:58.310 "nvmf_set_max_subsystems", 00:04:58.310 "nvmf_stop_mdns_prr", 00:04:58.310 "nvmf_publish_mdns_prr", 00:04:58.310 "nvmf_subsystem_get_listeners", 00:04:58.310 "nvmf_subsystem_get_qpairs", 00:04:58.310 "nvmf_subsystem_get_controllers", 00:04:58.310 "nvmf_get_stats", 00:04:58.310 "nvmf_get_transports", 00:04:58.310 "nvmf_create_transport", 00:04:58.310 "nvmf_get_targets", 00:04:58.311 
"nvmf_delete_target", 00:04:58.311 "nvmf_create_target", 00:04:58.311 "nvmf_subsystem_allow_any_host", 00:04:58.311 "nvmf_subsystem_set_keys", 00:04:58.311 "nvmf_subsystem_remove_host", 00:04:58.311 "nvmf_subsystem_add_host", 00:04:58.311 "nvmf_ns_remove_host", 00:04:58.311 "nvmf_ns_add_host", 00:04:58.311 "nvmf_subsystem_remove_ns", 00:04:58.311 "nvmf_subsystem_set_ns_ana_group", 00:04:58.311 "nvmf_subsystem_add_ns", 00:04:58.311 "nvmf_subsystem_listener_set_ana_state", 00:04:58.311 "nvmf_discovery_get_referrals", 00:04:58.311 "nvmf_discovery_remove_referral", 00:04:58.311 "nvmf_discovery_add_referral", 00:04:58.311 "nvmf_subsystem_remove_listener", 00:04:58.311 "nvmf_subsystem_add_listener", 00:04:58.311 "nvmf_delete_subsystem", 00:04:58.311 "nvmf_create_subsystem", 00:04:58.311 "nvmf_get_subsystems", 00:04:58.311 "env_dpdk_get_mem_stats", 00:04:58.311 "nbd_get_disks", 00:04:58.311 "nbd_stop_disk", 00:04:58.311 "nbd_start_disk", 00:04:58.311 "ublk_recover_disk", 00:04:58.311 "ublk_get_disks", 00:04:58.311 "ublk_stop_disk", 00:04:58.311 "ublk_start_disk", 00:04:58.311 "ublk_destroy_target", 00:04:58.311 "ublk_create_target", 00:04:58.311 "virtio_blk_create_transport", 00:04:58.311 "virtio_blk_get_transports", 00:04:58.311 "vhost_controller_set_coalescing", 00:04:58.311 "vhost_get_controllers", 00:04:58.311 "vhost_delete_controller", 00:04:58.311 "vhost_create_blk_controller", 00:04:58.311 "vhost_scsi_controller_remove_target", 00:04:58.311 "vhost_scsi_controller_add_target", 00:04:58.311 "vhost_start_scsi_controller", 00:04:58.311 "vhost_create_scsi_controller", 00:04:58.311 "thread_set_cpumask", 00:04:58.311 "scheduler_set_options", 00:04:58.311 "framework_get_governor", 00:04:58.311 "framework_get_scheduler", 00:04:58.311 "framework_set_scheduler", 00:04:58.311 "framework_get_reactors", 00:04:58.311 "thread_get_io_channels", 00:04:58.311 "thread_get_pollers", 00:04:58.311 "thread_get_stats", 00:04:58.311 "framework_monitor_context_switch", 00:04:58.311 "spdk_kill_instance", 00:04:58.311 "log_enable_timestamps", 00:04:58.311 "log_get_flags", 00:04:58.311 "log_clear_flag", 00:04:58.311 "log_set_flag", 00:04:58.311 "log_get_level", 00:04:58.311 "log_set_level", 00:04:58.311 "log_get_print_level", 00:04:58.311 "log_set_print_level", 00:04:58.311 "framework_enable_cpumask_locks", 00:04:58.311 "framework_disable_cpumask_locks", 00:04:58.311 "framework_wait_init", 00:04:58.311 "framework_start_init", 00:04:58.311 "scsi_get_devices", 00:04:58.311 "bdev_get_histogram", 00:04:58.311 "bdev_enable_histogram", 00:04:58.311 "bdev_set_qos_limit", 00:04:58.311 "bdev_set_qd_sampling_period", 00:04:58.311 "bdev_get_bdevs", 00:04:58.311 "bdev_reset_iostat", 00:04:58.311 "bdev_get_iostat", 00:04:58.311 "bdev_examine", 00:04:58.311 "bdev_wait_for_examine", 00:04:58.311 "bdev_set_options", 00:04:58.311 "accel_get_stats", 00:04:58.311 "accel_set_options", 00:04:58.311 "accel_set_driver", 00:04:58.311 "accel_crypto_key_destroy", 00:04:58.311 "accel_crypto_keys_get", 00:04:58.311 "accel_crypto_key_create", 00:04:58.311 "accel_assign_opc", 00:04:58.311 "accel_get_module_info", 00:04:58.311 "accel_get_opc_assignments", 00:04:58.311 "vmd_rescan", 00:04:58.311 "vmd_remove_device", 00:04:58.311 "vmd_enable", 00:04:58.311 "sock_get_default_impl", 00:04:58.311 "sock_set_default_impl", 00:04:58.311 "sock_impl_set_options", 00:04:58.311 "sock_impl_get_options", 00:04:58.311 "iobuf_get_stats", 00:04:58.311 "iobuf_set_options", 00:04:58.311 "keyring_get_keys", 00:04:58.311 "framework_get_pci_devices", 00:04:58.311 
"framework_get_config", 00:04:58.311 "framework_get_subsystems", 00:04:58.311 "fsdev_set_opts", 00:04:58.311 "fsdev_get_opts", 00:04:58.311 "trace_get_info", 00:04:58.311 "trace_get_tpoint_group_mask", 00:04:58.311 "trace_disable_tpoint_group", 00:04:58.311 "trace_enable_tpoint_group", 00:04:58.311 "trace_clear_tpoint_mask", 00:04:58.311 "trace_set_tpoint_mask", 00:04:58.311 "notify_get_notifications", 00:04:58.311 "notify_get_types", 00:04:58.311 "spdk_get_version", 00:04:58.311 "rpc_get_methods" 00:04:58.311 ] 00:04:58.311 13:45:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:58.311 13:45:51 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:58.311 13:45:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.311 13:45:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:58.311 13:45:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59878 00:04:58.311 13:45:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59878 ']' 00:04:58.311 13:45:51 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59878 00:04:58.311 13:45:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:58.311 13:45:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.311 13:45:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59878 00:04:58.311 killing process with pid 59878 00:04:58.311 13:45:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.311 13:45:51 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.311 13:45:51 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59878' 00:04:58.311 13:45:51 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59878 00:04:58.311 13:45:51 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59878 00:05:00.840 00:05:00.840 real 0m4.264s 00:05:00.840 user 0m7.543s 00:05:00.840 sys 0m0.659s 00:05:00.840 ************************************ 00:05:00.840 END TEST spdkcli_tcp 00:05:00.840 ************************************ 00:05:00.840 13:45:53 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.840 13:45:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.840 13:45:53 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:00.840 13:45:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.840 13:45:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.840 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:05:00.840 ************************************ 00:05:00.840 START TEST dpdk_mem_utility 00:05:00.840 ************************************ 00:05:00.840 13:45:53 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:01.100 * Looking for test storage... 
00:05:01.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:01.100 13:45:53 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:01.100 13:45:53 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:01.100 13:45:53 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:01.100 13:45:54 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:01.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.100 13:45:54 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:01.100 13:45:54 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.100 13:45:54 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:01.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.100 --rc genhtml_branch_coverage=1 00:05:01.100 --rc genhtml_function_coverage=1 00:05:01.100 --rc genhtml_legend=1 00:05:01.100 --rc geninfo_all_blocks=1 00:05:01.100 --rc geninfo_unexecuted_blocks=1 00:05:01.100 00:05:01.100 ' 00:05:01.100 13:45:54 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:01.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.100 --rc genhtml_branch_coverage=1 00:05:01.100 --rc genhtml_function_coverage=1 00:05:01.100 --rc genhtml_legend=1 00:05:01.100 --rc geninfo_all_blocks=1 00:05:01.100 --rc geninfo_unexecuted_blocks=1 00:05:01.100 00:05:01.100 ' 00:05:01.100 13:45:54 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:01.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.100 --rc genhtml_branch_coverage=1 00:05:01.100 --rc genhtml_function_coverage=1 00:05:01.100 --rc genhtml_legend=1 00:05:01.100 --rc geninfo_all_blocks=1 00:05:01.100 --rc geninfo_unexecuted_blocks=1 00:05:01.100 00:05:01.100 ' 00:05:01.100 13:45:54 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:01.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.100 --rc genhtml_branch_coverage=1 00:05:01.100 --rc genhtml_function_coverage=1 00:05:01.100 --rc genhtml_legend=1 00:05:01.100 --rc geninfo_all_blocks=1 00:05:01.100 --rc geninfo_unexecuted_blocks=1 00:05:01.100 00:05:01.100 ' 00:05:01.100 13:45:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:01.100 13:45:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60000 00:05:01.100 13:45:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60000 00:05:01.100 13:45:54 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 60000 ']' 00:05:01.100 13:45:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.100 13:45:54 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.100 13:45:54 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.100 13:45:54 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.100 13:45:54 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.100 13:45:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.100 [2024-12-11 13:45:54.121174] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:05:01.100 [2024-12-11 13:45:54.121565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60000 ] 00:05:01.360 [2024-12-11 13:45:54.303768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.619 [2024-12-11 13:45:54.412477] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.593 13:45:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.593 13:45:55 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:02.593 13:45:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:02.593 13:45:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:02.593 13:45:55 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.593 13:45:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.593 { 00:05:02.593 "filename": "/tmp/spdk_mem_dump.txt" 00:05:02.593 } 00:05:02.593 13:45:55 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.593 13:45:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:02.593 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:02.593 1 heaps totaling size 824.000000 MiB 00:05:02.593 size: 824.000000 MiB heap id: 0 00:05:02.593 end heaps---------- 00:05:02.593 9 mempools totaling size 603.782043 MiB 00:05:02.593 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:02.593 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:02.593 size: 100.555481 MiB name: bdev_io_60000 00:05:02.593 size: 50.003479 MiB name: msgpool_60000 00:05:02.593 size: 36.509338 MiB name: fsdev_io_60000 00:05:02.593 size: 21.763794 MiB name: PDU_Pool 00:05:02.593 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:02.593 size: 4.133484 MiB name: evtpool_60000 00:05:02.593 size: 0.026123 MiB name: Session_Pool 00:05:02.593 end mempools------- 00:05:02.593 6 memzones totaling size 4.142822 MiB 00:05:02.593 size: 1.000366 MiB name: RG_ring_0_60000 00:05:02.593 size: 1.000366 MiB name: RG_ring_1_60000 00:05:02.593 size: 1.000366 MiB name: RG_ring_4_60000 00:05:02.593 size: 1.000366 MiB name: RG_ring_5_60000 00:05:02.593 size: 0.125366 MiB name: RG_ring_2_60000 00:05:02.593 size: 0.015991 MiB name: RG_ring_3_60000 00:05:02.593 end memzones------- 00:05:02.593 13:45:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:02.593 heap id: 0 total size: 824.000000 MiB number of busy elements: 324 number of free elements: 18 00:05:02.593 list of free elements. 
size: 16.779175 MiB 00:05:02.593 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:02.593 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:02.593 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:02.593 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:02.593 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:02.593 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:02.593 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:02.593 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:02.593 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:02.593 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:02.593 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:02.593 element at address: 0x20001b400000 with size: 0.560486 MiB 00:05:02.593 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:02.593 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:02.593 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:02.593 element at address: 0x200012c00000 with size: 0.433472 MiB 00:05:02.593 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:02.593 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:02.593 list of standard malloc elements. size: 199.289917 MiB 00:05:02.593 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:02.593 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:02.593 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:02.593 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:02.593 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:02.593 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:02.593 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:02.593 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:02.593 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:02.593 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:02.593 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:02.593 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:02.593 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:02.593 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:02.593 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:02.593 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:02.593 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:02.593 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:02.593 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:02.593 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:02.594 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:02.594 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:02.594 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:02.594 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:02.594 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:05:02.595 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4914c0 with size: 0.000244 MiB 
00:05:02.595 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:02.595 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:02.595 element at 
address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:02.596 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:02.596 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886cf80 
with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:02.596 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:02.600 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:02.600 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:02.600 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:02.600 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:02.600 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:02.600 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:02.600 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:02.600 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:02.600 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:02.600 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:02.600 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:02.600 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:02.600 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:02.600 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:02.600 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:02.600 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:02.600 list of memzone associated elements. 
size: 607.930908 MiB 00:05:02.600 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:02.600 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:02.600 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:02.600 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:02.600 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:02.600 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_60000_0 00:05:02.600 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:02.600 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60000_0 00:05:02.601 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:02.601 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_60000_0 00:05:02.601 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:02.601 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:02.601 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:02.601 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:02.601 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:02.601 associated memzone info: size: 3.000122 MiB name: MP_evtpool_60000_0 00:05:02.601 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:02.601 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60000 00:05:02.601 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:02.601 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60000 00:05:02.601 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:02.601 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:02.601 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:02.601 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:02.601 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:02.601 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:02.601 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:02.601 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:02.601 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:02.601 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60000 00:05:02.601 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:02.601 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60000 00:05:02.601 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:02.601 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60000 00:05:02.601 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:02.601 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60000 00:05:02.601 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:02.601 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_60000 00:05:02.601 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:02.601 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60000 00:05:02.601 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:02.601 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:02.601 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:02.601 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:02.601 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:02.601 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:02.601 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:02.601 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_60000 00:05:02.601 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:02.601 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60000 00:05:02.601 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:02.601 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:02.601 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:02.601 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:02.601 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:02.601 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60000 00:05:02.601 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:02.601 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:02.601 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:02.601 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60000 00:05:02.601 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:02.601 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_60000 00:05:02.601 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:02.601 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60000 00:05:02.601 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:02.601 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:02.601 13:45:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:02.601 13:45:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60000 00:05:02.601 13:45:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 60000 ']' 00:05:02.601 13:45:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 60000 00:05:02.601 13:45:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:02.601 13:45:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.601 13:45:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60000 00:05:02.601 13:45:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.601 killing process with pid 60000 00:05:02.601 13:45:55 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.601 13:45:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60000' 00:05:02.601 13:45:55 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 60000 00:05:02.601 13:45:55 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 60000 00:05:05.135 00:05:05.135 real 0m3.932s 00:05:05.135 user 0m3.819s 00:05:05.135 sys 0m0.578s 00:05:05.135 13:45:57 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.135 ************************************ 00:05:05.135 END TEST dpdk_mem_utility 00:05:05.135 ************************************ 00:05:05.135 13:45:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:05.135 13:45:57 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:05.135 13:45:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.136 13:45:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.136 13:45:57 -- common/autotest_common.sh@10 -- # set +x 
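The dpdk_mem_utility pass above is, at its core, one RPC plus a parser: the target dumps its DPDK allocator state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py renders the heap, mempool, and memzone tables shown. A minimal sketch of the same flow, assuming a built tree, the default RPC socket, and simplified pid handling (the harness uses waitforlisten rather than sleep):

# sketch: reproduce the mem-stats dump outside the test harness
./build/bin/spdk_tgt -m 0x1 &                  # single-core target, as in the log (-c 0x1)
spdkpid=$!
sleep 1                                        # crude stand-in for waitforlisten
./scripts/rpc.py env_dpdk_get_mem_stats        # writes /tmp/spdk_mem_dump.txt
./scripts/dpdk_mem_info.py                     # summary: heaps, mempools, memzones
./scripts/dpdk_mem_info.py -m 0                # free/malloc/memzone elements for heap id 0
kill $spdkpid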
00:05:05.136 ************************************ 00:05:05.136 START TEST event 00:05:05.136 ************************************ 00:05:05.136 13:45:57 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:05.136 * Looking for test storage... 00:05:05.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:05.136 13:45:57 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:05.136 13:45:57 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.136 13:45:57 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:05.136 13:45:58 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.136 13:45:58 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.136 13:45:58 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.136 13:45:58 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.136 13:45:58 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.136 13:45:58 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.136 13:45:58 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.136 13:45:58 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.136 13:45:58 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.136 13:45:58 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.136 13:45:58 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.136 13:45:58 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.136 13:45:58 event -- scripts/common.sh@344 -- # case "$op" in 00:05:05.136 13:45:58 event -- scripts/common.sh@345 -- # : 1 00:05:05.136 13:45:58 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.136 13:45:58 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.136 13:45:58 event -- scripts/common.sh@365 -- # decimal 1 00:05:05.136 13:45:58 event -- scripts/common.sh@353 -- # local d=1 00:05:05.136 13:45:58 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.136 13:45:58 event -- scripts/common.sh@355 -- # echo 1 00:05:05.136 13:45:58 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.136 13:45:58 event -- scripts/common.sh@366 -- # decimal 2 00:05:05.136 13:45:58 event -- scripts/common.sh@353 -- # local d=2 00:05:05.136 13:45:58 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.136 13:45:58 event -- scripts/common.sh@355 -- # echo 2 00:05:05.136 13:45:58 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.136 13:45:58 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.136 13:45:58 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.136 13:45:58 event -- scripts/common.sh@368 -- # return 0 00:05:05.136 13:45:58 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.136 13:45:58 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.136 --rc genhtml_branch_coverage=1 00:05:05.136 --rc genhtml_function_coverage=1 00:05:05.136 --rc genhtml_legend=1 00:05:05.136 --rc geninfo_all_blocks=1 00:05:05.136 --rc geninfo_unexecuted_blocks=1 00:05:05.136 00:05:05.136 ' 00:05:05.136 13:45:58 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.136 --rc genhtml_branch_coverage=1 00:05:05.136 --rc genhtml_function_coverage=1 00:05:05.136 --rc genhtml_legend=1 00:05:05.136 --rc 
geninfo_all_blocks=1 00:05:05.136 --rc geninfo_unexecuted_blocks=1 00:05:05.136 00:05:05.136 ' 00:05:05.136 13:45:58 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:05.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.136 --rc genhtml_branch_coverage=1 00:05:05.136 --rc genhtml_function_coverage=1 00:05:05.136 --rc genhtml_legend=1 00:05:05.136 --rc geninfo_all_blocks=1 00:05:05.136 --rc geninfo_unexecuted_blocks=1 00:05:05.136 00:05:05.136 ' 00:05:05.136 13:45:58 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.136 --rc genhtml_branch_coverage=1 00:05:05.136 --rc genhtml_function_coverage=1 00:05:05.136 --rc genhtml_legend=1 00:05:05.136 --rc geninfo_all_blocks=1 00:05:05.136 --rc geninfo_unexecuted_blocks=1 00:05:05.136 00:05:05.136 ' 00:05:05.136 13:45:58 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:05.136 13:45:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:05.136 13:45:58 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.136 13:45:58 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:05.136 13:45:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.136 13:45:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.136 ************************************ 00:05:05.136 START TEST event_perf 00:05:05.136 ************************************ 00:05:05.136 13:45:58 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.136 Running I/O for 1 seconds...[2024-12-11 13:45:58.131189] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:05.136 [2024-12-11 13:45:58.131302] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60108 ] 00:05:05.395 [2024-12-11 13:45:58.314405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:05.395 [2024-12-11 13:45:58.431265] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.395 [2024-12-11 13:45:58.431430] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.395 [2024-12-11 13:45:58.431595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:05.395 [2024-12-11 13:45:58.431633] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.773 Running I/O for 1 seconds... 00:05:06.773 lcore 0: 207959 00:05:06.773 lcore 1: 207958 00:05:06.773 lcore 2: 207959 00:05:06.773 lcore 3: 207959 00:05:06.773 done. 
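Those per-lcore counters are the whole measurement: with -m 0xF the app starts four reactors, and each prints how many events it processed during the 1-second window selected by -t 1, roughly 208k per core here. A standalone invocation matching the run_test line above:

# sketch: run the perf binary directly, same flags as the harness
./test/event/event_perf/event_perf -m 0xF -t 1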
00:05:06.773 00:05:06.773 real 0m1.591s 00:05:06.773 user 0m4.345s 00:05:06.773 sys 0m0.125s 00:05:06.773 13:45:59 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.773 13:45:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.773 ************************************ 00:05:06.773 END TEST event_perf 00:05:06.773 ************************************ 00:05:06.773 13:45:59 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:06.773 13:45:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:06.773 13:45:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.773 13:45:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.773 ************************************ 00:05:06.773 START TEST event_reactor 00:05:06.773 ************************************ 00:05:06.773 13:45:59 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:06.773 [2024-12-11 13:45:59.800912] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:06.773 [2024-12-11 13:45:59.801019] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60148 ] 00:05:07.032 [2024-12-11 13:45:59.982143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.291 [2024-12-11 13:46:00.100867] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.666 test_start 00:05:08.666 oneshot 00:05:08.666 tick 100 00:05:08.666 tick 100 00:05:08.666 tick 250 00:05:08.666 tick 100 00:05:08.666 tick 100 00:05:08.666 tick 250 00:05:08.666 tick 100 00:05:08.666 tick 500 00:05:08.666 tick 100 00:05:08.666 tick 100 00:05:08.666 tick 250 00:05:08.666 tick 100 00:05:08.666 tick 100 00:05:08.666 test_end 00:05:08.666 00:05:08.666 real 0m1.570s 00:05:08.666 user 0m1.351s 00:05:08.666 sys 0m0.111s 00:05:08.666 13:46:01 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.666 ************************************ 00:05:08.666 END TEST event_reactor 00:05:08.666 ************************************ 00:05:08.666 13:46:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:08.666 13:46:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.666 13:46:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:08.666 13:46:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.666 13:46:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.666 ************************************ 00:05:08.666 START TEST event_reactor_perf 00:05:08.666 ************************************ 00:05:08.666 13:46:01 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.666 [2024-12-11 13:46:01.444471] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
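The tick trace in the event_reactor run above is consistent with pollers registered at periods of 100, 250, and 500 (in the test's own tick units) reporting each time they fire during the 1-second run; hence two "tick 100" lines per "tick 250" and one "tick 500" near the middle. The direct form, mirroring the run_test line:

# sketch: run the reactor test directly
./test/event/reactor/reactor -t 1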
00:05:08.666 [2024-12-11 13:46:01.444576] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60185 ] 00:05:08.666 [2024-12-11 13:46:01.619903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.924 [2024-12-11 13:46:01.726184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.900 test_start 00:05:09.900 test_end 00:05:09.900 Performance: 399160 events per second 00:05:09.900 00:05:09.900 real 0m1.545s 00:05:09.900 user 0m1.330s 00:05:09.900 sys 0m0.107s 00:05:09.900 13:46:02 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.900 ************************************ 00:05:09.900 END TEST event_reactor_perf 00:05:09.900 13:46:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.900 ************************************ 00:05:10.159 13:46:03 event -- event/event.sh@49 -- # uname -s 00:05:10.159 13:46:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:10.159 13:46:03 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:10.159 13:46:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.159 13:46:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.159 13:46:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.159 ************************************ 00:05:10.159 START TEST event_scheduler 00:05:10.159 ************************************ 00:05:10.159 13:46:03 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:10.159 * Looking for test storage... 
00:05:10.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:10.159 13:46:03 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:10.159 13:46:03 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:10.159 13:46:03 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.418 13:46:03 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.418 13:46:03 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:10.418 13:46:03 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.418 13:46:03 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.418 --rc genhtml_branch_coverage=1 00:05:10.418 --rc genhtml_function_coverage=1 00:05:10.418 --rc genhtml_legend=1 00:05:10.418 --rc geninfo_all_blocks=1 00:05:10.418 --rc geninfo_unexecuted_blocks=1 00:05:10.418 00:05:10.418 ' 00:05:10.418 13:46:03 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.418 --rc genhtml_branch_coverage=1 00:05:10.418 --rc genhtml_function_coverage=1 00:05:10.418 --rc genhtml_legend=1 00:05:10.418 --rc geninfo_all_blocks=1 00:05:10.418 --rc geninfo_unexecuted_blocks=1 00:05:10.418 00:05:10.418 ' 00:05:10.418 13:46:03 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:10.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.418 --rc genhtml_branch_coverage=1 00:05:10.418 --rc genhtml_function_coverage=1 00:05:10.418 --rc genhtml_legend=1 00:05:10.418 --rc geninfo_all_blocks=1 00:05:10.418 --rc geninfo_unexecuted_blocks=1 00:05:10.418 00:05:10.418 ' 00:05:10.418 13:46:03 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.418 --rc genhtml_branch_coverage=1 00:05:10.418 --rc genhtml_function_coverage=1 00:05:10.418 --rc genhtml_legend=1 00:05:10.418 --rc geninfo_all_blocks=1 00:05:10.418 --rc geninfo_unexecuted_blocks=1 00:05:10.418 00:05:10.418 ' 00:05:10.418 13:46:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:10.418 13:46:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60260 00:05:10.418 13:46:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:10.418 13:46:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.418 13:46:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60260 00:05:10.418 13:46:03 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60260 ']' 00:05:10.418 13:46:03 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.418 13:46:03 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.418 13:46:03 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.418 13:46:03 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.418 13:46:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.418 [2024-12-11 13:46:03.338963] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:10.418 [2024-12-11 13:46:03.339259] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60260 ] 00:05:10.677 [2024-12-11 13:46:03.521092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:10.677 [2024-12-11 13:46:03.635948] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.677 [2024-12-11 13:46:03.636124] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.677 [2024-12-11 13:46:03.636265] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.677 [2024-12-11 13:46:03.636300] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.245 13:46:04 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.245 13:46:04 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:11.245 13:46:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:11.245 13:46:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.245 13:46:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.245 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.245 POWER: Cannot set governor of lcore 0 to userspace 00:05:11.245 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.245 POWER: Cannot set governor of lcore 0 to performance 00:05:11.245 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.245 POWER: Cannot set governor of lcore 0 to userspace 00:05:11.245 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.245 POWER: Cannot set governor of lcore 0 to userspace 00:05:11.245 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:11.245 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:11.245 POWER: Unable to set Power Management Environment for lcore 0 00:05:11.245 [2024-12-11 13:46:04.186260] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:11.245 [2024-12-11 13:46:04.186330] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:11.245 [2024-12-11 13:46:04.186482] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:11.245 [2024-12-11 13:46:04.186583] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:11.245 [2024-12-11 13:46:04.186637] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:11.245 [2024-12-11 13:46:04.186687] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:11.245 13:46:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.245 13:46:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:11.245 13:46:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.245 13:46:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.504 [2024-12-11 13:46:04.502323] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:11.504 13:46:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.504 13:46:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:11.504 13:46:04 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.504 13:46:04 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.504 13:46:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.504 ************************************ 00:05:11.504 START TEST scheduler_create_thread 00:05:11.504 ************************************ 00:05:11.504 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:11.504 13:46:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:11.504 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.504 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.504 2 00:05:11.504 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.504 13:46:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:11.504 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.504 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.504 3 00:05:11.504 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.504 13:46:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:11.504 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.504 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.763 4 00:05:11.763 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.763 13:46:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:11.764 5
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:11.764 6
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:11.764 7
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:11.764 8
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:11.764 9
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:11.764 10
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:11.764 13:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:12.699 13:46:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:12.699 13:46:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:12.699 13:46:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:12.699 13:46:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:14.076 13:46:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:14.076 13:46:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:14.076 13:46:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:14.076 13:46:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:14.076 13:46:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:15.012 13:46:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:15.012
00:05:15.012 real 0m3.377s
00:05:15.012 user 0m0.029s
00:05:15.012 sys 0m0.004s
************************************
00:05:15.012 END TEST scheduler_create_thread
************************************
00:05:15.012 13:46:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:15.012 13:46:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:15.012 13:46:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:15.012 13:46:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60260
00:05:15.012 13:46:07 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60260 ']'
00:05:15.012 13:46:07 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60260
00:05:15.012 13:46:07 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:05:15.012 13:46:07 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:15.012 13:46:07 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60260
killing process with pid 60260
00:05:15.012 13:46:08 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:15.012 13:46:08 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:15.012 13:46:08 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60260'
00:05:15.012 13:46:08 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60260
00:05:15.012 13:46:08 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 60260
00:05:15.271 [2024-12-11 13:46:08.274879] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:16.648
00:05:16.648 real 0m6.440s
00:05:16.648 user 0m12.922s
00:05:16.648 sys 0m0.548s
00:05:16.648 13:46:09 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:16.648 ************************************
00:05:16.648 END TEST event_scheduler
************************************
00:05:16.648 13:46:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:16.648 13:46:09 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:16.648 13:46:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:16.648 13:46:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:16.648 13:46:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:16.648 13:46:09 event -- common/autotest_common.sh@10 -- # set +x
00:05:16.648 ************************************
00:05:16.648 START TEST app_repeat
************************************
00:05:16.648 13:46:09 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:05:16.648 13:46:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:16.648 13:46:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:16.648 13:46:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:16.648 13:46:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:16.649 13:46:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:16.649 13:46:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:16.649 13:46:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:16.649 13:46:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60376
00:05:16.649 13:46:09 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:16.649 13:46:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:16.649 Process app_repeat pid: 60376
00:05:16.649 13:46:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60376'
00:05:16.649 13:46:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:16.649 spdk_app_start Round 0
00:05:16.649 13:46:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:16.649 13:46:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60376 /var/tmp/spdk-nbd.sock
00:05:16.649 13:46:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60376 ']'
00:05:16.649 13:46:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:16.649 13:46:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:16.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
13:46:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:16.649 13:46:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:16.649 13:46:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
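Before the app_repeat trace continues below, it is worth condensing what the event_scheduler test that just ended actually did: every step is a JSON-RPC call issued through rpc_cmd, which forwards to scripts/rpc.py. A minimal sketch of the same sequence, assuming an SPDK checkout at $SPDK_DIR, the scheduler test app already listening on its default RPC socket, and the scheduler_plugin module importable by rpc.py (the thread ids 11 and 12 seen above are simply whatever the create calls return):

    rpc() { "$SPDK_DIR/scripts/rpc.py" --plugin scheduler_plugin "$@"; }

    # four idle threads, one pinned to each core via the cpumasks 0x1..0x8
    for mask in 0x1 0x2 0x4 0x8; do
        rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done

    rpc scheduler_thread_create -n one_third_active -a 30   # unpinned, 30% busy
    id=$(rpc scheduler_thread_create -n half_active -a 0)   # created idle (id 11 above) ...
    rpc scheduler_thread_set_active "$id" 50                # ... then raised to 50% busy

    id=$(rpc scheduler_thread_create -n deleted -a 100)     # fully busy thread (id 12 above) ...
    rpc scheduler_thread_delete "$id"                       # ... removed again

The -a percentage is the simulated load the dynamic scheduler reacts to, which appears to be why the test spreads its threads across the idle/30%/50%/100% spectrum before tearing the app down.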
00:05:16.649 [2024-12-11 13:46:09.614739] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization...
00:05:16.649 [2024-12-11 13:46:09.614877] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60376 ]
00:05:16.908 [2024-12-11 13:46:09.797543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:16.908 [2024-12-11 13:46:09.912774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:16.908 [2024-12-11 13:46:09.912804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:17.502 13:46:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:17.502 13:46:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:17.502 13:46:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:17.761 Malloc0
00:05:17.761 13:46:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:18.020 Malloc1
00:05:18.020 13:46:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:18.020 13:46:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:18.020 13:46:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:18.020 13:46:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:18.020 13:46:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:18.020 13:46:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:18.020 13:46:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:18.020 13:46:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:18.020 13:46:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:18.020 13:46:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:18.020 13:46:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:18.020 13:46:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:18.020 13:46:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:18.020 13:46:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:18.020 13:46:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:18.020 13:46:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:18.279 /dev/nbd0
00:05:18.279 13:46:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:18.279 13:46:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:18.279 13:46:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:18.279 13:46:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:18.279 13:46:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:18.279 13:46:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:18.279 13:46:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:18.279 13:46:11 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:18.279 13:46:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:18.279 13:46:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:18.279 13:46:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:18.279 1+0 records in
00:05:18.279 1+0 records out
00:05:18.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265745 s, 15.4 MB/s
00:05:18.279 13:46:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:18.279 13:46:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:18.279 13:46:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:18.279 13:46:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:18.279 13:46:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:18.279 13:46:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:18.279 13:46:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:18.279 13:46:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:18.538 /dev/nbd1
00:05:18.538 13:46:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:18.538 13:46:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:18.538 13:46:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:18.538 13:46:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:18.538 13:46:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:18.538 13:46:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:18.538 13:46:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:18.538 13:46:11 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:18.538 13:46:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:18.538 13:46:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:18.538 13:46:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:18.538 1+0 records in
00:05:18.538 1+0 records out
00:05:18.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314661 s, 13.0 MB/s
00:05:18.538 13:46:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:18.538 13:46:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:18.538 13:46:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:18.538 13:46:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:18.538 13:46:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:18.538 13:46:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:18.538 13:46:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
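Each nbd_start_disk above is immediately followed by waitfornbd, which polls /proc/partitions until the kernel has registered the new device and then pulls a single block through it to prove the export actually services I/O. A trimmed sketch of that traced logic (the retry pacing and failure path of the real helper are elided, and the scratch-file path here is illustrative):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed pacing between polls; this run hit the device on the first try
        done
        # read one 4 KiB block with O_DIRECT and confirm something actually arrived
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

The O_DIRECT read matters: it bypasses the page cache, so a successful copy means the nbd kernel module really round-tripped a request to the SPDK app rather than serving it from memory.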
00:05:18.538 13:46:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:18.538 13:46:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:18.538 13:46:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:18.798 {
00:05:18.798 "nbd_device": "/dev/nbd0",
00:05:18.798 "bdev_name": "Malloc0"
00:05:18.798 },
00:05:18.798 {
00:05:18.798 "nbd_device": "/dev/nbd1",
00:05:18.798 "bdev_name": "Malloc1"
00:05:18.798 }
00:05:18.798 ]'
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:18.798 {
00:05:18.798 "nbd_device": "/dev/nbd0",
00:05:18.798 "bdev_name": "Malloc0"
00:05:18.798 },
00:05:18.798 {
00:05:18.798 "nbd_device": "/dev/nbd1",
00:05:18.798 "bdev_name": "Malloc1"
00:05:18.798 }
00:05:18.798 ]'
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:18.798 /dev/nbd1'
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:18.798 /dev/nbd1'
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:18.798 256+0 records in
00:05:18.798 256+0 records out
00:05:18.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513765 s, 204 MB/s
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:18.798 256+0 records in
00:05:18.798 256+0 records out
00:05:18.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297639 s, 35.2 MB/s
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:18.798 256+0 records in
00:05:18.798 256+0 records out
00:05:18.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0328776 s, 31.9 MB/s
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:18.798 13:46:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:19.056 13:46:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:19.056 13:46:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:19.056 13:46:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:19.056 13:46:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:19.056 13:46:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:19.056 13:46:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:19.056 13:46:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:19.056 13:46:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:19.056 13:46:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:19.056 13:46:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:19.056 13:46:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:19.056 13:46:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:19.056 13:46:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:19.056 13:46:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:19.056 13:46:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:19.056 13:46:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:19.056 13:46:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:19.056 13:46:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:19.056 13:46:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:19.056 13:46:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:19.313 13:46:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:19.313 13:46:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:19.313 13:46:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:19.313 13:46:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:19.313 13:46:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:19.313 13:46:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:19.313 13:46:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:19.313 13:46:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:19.313 13:46:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:19.313 13:46:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:19.313 13:46:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:19.571 13:46:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:19.571 13:46:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:19.571 13:46:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:19.571 13:46:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:19.571 13:46:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:19.571 13:46:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:19.571 13:46:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:19.571 13:46:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:19.571 13:46:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:19.571 13:46:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:19.571 13:46:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:19.571 13:46:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:19.571 13:46:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:20.139 13:46:12 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:21.516 [2024-12-11 13:46:14.150190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:21.516 [2024-12-11 13:46:14.264326] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:21.516 [2024-12-11 13:46:14.264328] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.516 [2024-12-11 13:46:14.462469] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:21.516 [2024-12-11 13:46:14.462564] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
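Round 0 is complete, and the reactor notices above show the key trick of this test: the app_repeat binary catches the SIGTERM delivered by spdk_kill_instance and re-enters spdk_app_start for the next iteration instead of exiting, which is why the same pid 60376 persists across rounds. Condensed from the trace, the per-round driver amounts to the following (paths relative to the SPDK repo; the verification body is abbreviated):

    rpc_server=/var/tmp/spdk-nbd.sock
    test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
    repeat_pid=$!
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"       # helper from autotest_common.sh
        scripts/rpc.py -s "$rpc_server" bdev_malloc_create 64 4096   # -> Malloc0 (64 MiB, 4 KiB blocks)
        scripts/rpc.py -s "$rpc_server" bdev_malloc_create 64 4096   # -> Malloc1
        # ... nbd export, write/verify, nbd teardown, as traced ...
        scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM   # ends this iteration only
        sleep 3
    done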
00:05:23.421 spdk_app_start Round 1
00:05:23.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
13:46:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:23.421 13:46:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:23.421 13:46:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60376 /var/tmp/spdk-nbd.sock
00:05:23.421 13:46:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60376 ']'
00:05:23.421 13:46:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:23.421 13:46:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:23.422 13:46:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:23.422 13:46:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:23.422 13:46:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:23.422 13:46:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:23.422 13:46:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:23.422 13:46:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:23.709 Malloc0
00:05:23.709 13:46:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:23.709 Malloc1
00:05:23.709 13:46:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:23.709 13:46:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:23.709 13:46:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:23.709 13:46:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:23.709 13:46:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:23.709 13:46:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:23.709 13:46:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:23.709 13:46:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:23.709 13:46:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:23.709 13:46:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:23.709 13:46:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:23.709 13:46:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:23.709 13:46:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:23.709 13:46:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:23.709 13:46:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:23.709 13:46:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:23.975 /dev/nbd0
00:05:23.975 13:46:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:23.975 13:46:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:23.975 13:46:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:23.975 13:46:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:23.975 13:46:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:23.975 13:46:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:23.975 13:46:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:23.976 13:46:16 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:23.976 13:46:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:23.976 13:46:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:23.976 13:46:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:23.976 1+0 records in
00:05:23.976 1+0 records out
00:05:23.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321454 s, 12.7 MB/s
00:05:23.976 13:46:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:23.976 13:46:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:23.976 13:46:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:23.976 13:46:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:23.976 13:46:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:23.976 13:46:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:23.976 13:46:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:23.976 13:46:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:24.235 /dev/nbd1
00:05:24.235 13:46:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:24.235 13:46:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:24.235 13:46:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:24.235 13:46:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:24.235 13:46:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:24.235 13:46:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:24.235 13:46:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:24.235 13:46:17 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:24.235 13:46:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:24.235 13:46:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:24.235 13:46:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:24.235 1+0 records in
00:05:24.235 1+0 records out
00:05:24.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326868 s, 12.5 MB/s
00:05:24.235 13:46:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:24.235 13:46:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:24.235 13:46:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:24.235 13:46:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:24.235 13:46:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:24.235 13:46:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:24.235 13:46:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:24.235 13:46:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:24.235 13:46:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.235 13:46:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:24.494 13:46:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:24.494 {
00:05:24.494 "nbd_device": "/dev/nbd0",
00:05:24.494 "bdev_name": "Malloc0"
00:05:24.494 },
00:05:24.494 {
00:05:24.494 "nbd_device": "/dev/nbd1",
00:05:24.494 "bdev_name": "Malloc1"
00:05:24.494 }
00:05:24.494 ]'
00:05:24.494 13:46:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:24.494 {
00:05:24.494 "nbd_device": "/dev/nbd0",
00:05:24.494 "bdev_name": "Malloc0"
00:05:24.494 },
00:05:24.494 {
00:05:24.494 "nbd_device": "/dev/nbd1",
00:05:24.494 "bdev_name": "Malloc1"
00:05:24.494 }
00:05:24.494 ]'
00:05:24.494 13:46:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:24.494 13:46:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:24.494 /dev/nbd1'
00:05:24.494 13:46:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:24.494 /dev/nbd1'
00:05:24.494 13:46:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:24.494 13:46:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:24.494 13:46:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:24.494 13:46:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:24.494 13:46:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:24.494 13:46:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:24.494 13:46:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.494 13:46:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:24.494 13:46:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:24.495 13:46:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:24.495 13:46:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:24.495 13:46:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:24.754 256+0 records in
00:05:24.754 256+0 records out
00:05:24.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112635 s, 93.1 MB/s
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:24.754 256+0 records in
00:05:24.754 256+0 records out
00:05:24.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287491 s, 36.5 MB/s
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:24.754 256+0 records in
00:05:24.754 256+0 records out
00:05:24.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312509 s, 33.6 MB/s
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:24.754 13:46:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:25.014 13:46:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:25.014 13:46:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:25.014 13:46:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:25.014 13:46:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:25.014 13:46:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:25.014 13:46:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:25.014 13:46:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:25.014 13:46:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:25.014 13:46:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:25.014 13:46:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:25.273 13:46:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:25.273 13:46:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:25.273 13:46:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:25.273 13:46:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:25.273 13:46:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:25.273 13:46:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:25.273 13:46:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:25.273 13:46:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:25.273 13:46:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:25.273 13:46:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:25.273 13:46:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:25.273 13:46:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:25.273 13:46:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:25.273 13:46:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:25.532 13:46:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:25.532 13:46:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:25.532 13:46:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:25.533 13:46:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:25.533 13:46:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:25.533 13:46:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:25.533 13:46:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:25.533 13:46:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:25.533 13:46:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
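The zero that nbd_get_count just produced is derived entirely from the RPC reply; the trace runs the same pipeline twice per round, yielding count=2 while the disks are attached and count=0 after both nbd_stop_disk calls. Reassembled from the traced commands (the || true mirrors the traced true, which absorbs grep's nonzero exit status when the list is empty):

    disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    # e.g. [ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" }, ... ] -- or [] after teardown
    nbd_disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)   # 2 while attached, 0 after stop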
00:05:25.533 13:46:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:25.792 13:46:18 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:27.172 [2024-12-11 13:46:19.933664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:27.172 [2024-12-11 13:46:20.046680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:27.172 [2024-12-11 13:46:20.046698] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:27.432 [2024-12-11 13:46:20.249228] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:27.432 [2024-12-11 13:46:20.249517] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:28.811 spdk_app_start Round 2
00:05:28.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
13:46:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:28.811 13:46:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:28.811 13:46:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60376 /var/tmp/spdk-nbd.sock
00:05:28.811 13:46:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60376 ']'
00:05:28.811 13:46:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:28.811 13:46:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:28.811 13:46:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:28.811 13:46:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:28.811 13:46:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:29.071 13:46:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:29.071 13:46:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:29.071 13:46:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:29.330 Malloc0
00:05:29.330 13:46:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:29.590 Malloc1
00:05:29.590 13:46:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:29.590 13:46:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:29.590 13:46:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:29.590 13:46:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:29.590 13:46:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:29.590 13:46:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:29.590 13:46:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:29.590 13:46:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:29.590 13:46:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:29.590 13:46:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:29.590 13:46:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:29.590 13:46:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:29.590 13:46:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:29.590 13:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:29.590 13:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:29.590 13:46:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:29.849 /dev/nbd0
00:05:29.849 13:46:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:29.849 13:46:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:29.849 13:46:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:29.849 13:46:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:29.849 13:46:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:29.849 13:46:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:29.849 13:46:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:29.849 13:46:22 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:29.849 13:46:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:29.849 13:46:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:29.849 13:46:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:29.849 1+0 records in
00:05:29.849 1+0 records out
00:05:29.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267104 s, 15.3 MB/s
00:05:29.849 13:46:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:29.849 13:46:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:29.849 13:46:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:29.849 13:46:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:29.849 13:46:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:29.849 13:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:29.849 13:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:29.849 13:46:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:30.109 /dev/nbd1
00:05:30.109 13:46:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:30.109 13:46:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:30.109 13:46:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:30.109 13:46:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:30.109 13:46:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:30.109 13:46:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:30.109 13:46:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:30.109 13:46:22 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:30.110 13:46:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:30.110 13:46:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:30.110 13:46:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:30.110 1+0 records in
00:05:30.110 1+0 records out
00:05:30.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265469 s, 15.4 MB/s
00:05:30.110 13:46:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:30.110 13:46:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:30.110 13:46:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:30.110 13:46:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:30.110 13:46:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:30.110 13:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:30.110 13:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:30.110 13:46:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:30.110 13:46:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:30.110 13:46:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:30.370 {
00:05:30.370 "nbd_device": "/dev/nbd0",
00:05:30.370 "bdev_name": "Malloc0"
00:05:30.370 },
00:05:30.370 {
00:05:30.370 "nbd_device": "/dev/nbd1",
00:05:30.370 "bdev_name": "Malloc1"
00:05:30.370 }
00:05:30.370 ]'
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:30.370 {
00:05:30.370 "nbd_device": "/dev/nbd0",
00:05:30.370 "bdev_name": "Malloc0"
00:05:30.370 },
00:05:30.370 {
00:05:30.370 "nbd_device": "/dev/nbd1",
00:05:30.370 "bdev_name": "Malloc1"
00:05:30.370 }
00:05:30.370 ]'
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:30.370 /dev/nbd1'
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:30.370 /dev/nbd1'
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:30.370 256+0 records in
00:05:30.370 256+0 records out
00:05:30.370 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00705174 s, 149 MB/s
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:30.370 256+0 records in
00:05:30.370 256+0 records out
00:05:30.370 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279322 s, 37.5 MB/s
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:30.370 256+0 records in
00:05:30.370 256+0 records out
00:05:30.370 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307869 s, 34.1 MB/s
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:30.370 13:46:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:30.630 13:46:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:30.630 13:46:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:30.630 13:46:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:30.630 13:46:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:30.630 13:46:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:30.630 13:46:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:30.630 13:46:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:30.630 13:46:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:30.630 13:46:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:30.630 13:46:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:30.890 13:46:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:30.890 13:46:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:30.890 13:46:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:30.890 13:46:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:30.890 13:46:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:30.890 13:46:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:30.890 13:46:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:30.890 13:46:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:30.890 13:46:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:30.890 13:46:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:30.890 13:46:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:31.150 13:46:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:31.150 13:46:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:31.150 13:46:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:31.150 13:46:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:31.150 13:46:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:31.150 13:46:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:31.150 13:46:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:31.150 13:46:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:31.150 13:46:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:31.150 13:46:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:31.150 13:46:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:31.150 13:46:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:31.150 13:46:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:31.758 13:46:24 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:32.696 [2024-12-11 13:46:25.695697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:32.955 [2024-12-11 13:46:25.804376] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:32.955 [2024-12-11 13:46:25.804377] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:32.955 [2024-12-11 13:46:25.995923] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:32.955 [2024-12-11 13:46:25.996005] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:34.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
13:46:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60376 /var/tmp/spdk-nbd.sock
00:05:34.861 13:46:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60376 ']'
00:05:34.861 13:46:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:34.861 13:46:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:34.861 13:46:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
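After this final waitforlisten the test tears the application down: killprocess 60376 below is the same helper that stopped the scheduler app earlier. A trimmed sketch of the path this run takes (the helper's sudo branch, probed by the '[' reactor_0 = sudo ']' check below, does not trigger here, and the wait handling is simplified):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid"                       # fail fast if the pid is already gone
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                  # reap the child; a nonzero exit is expected
    }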
00:05:34.861 13:46:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:34.861 13:46:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:34.861 13:46:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:34.861 13:46:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:34.861 13:46:27 event.app_repeat -- event/event.sh@39 -- # killprocess 60376
00:05:34.861 13:46:27 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60376 ']'
00:05:34.861 13:46:27 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60376
00:05:34.861 13:46:27 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:05:34.861 13:46:27 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:34.861 13:46:27 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60376
00:05:34.861 13:46:27 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
killing process with pid 60376
13:46:27 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:34.861 13:46:27 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60376'
00:05:34.861 13:46:27 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60376
00:05:34.861 13:46:27 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60376
00:05:35.798 spdk_app_start is called in Round 0.
00:05:35.798 Shutdown signal received, stop current app iteration
00:05:35.798 Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 reinitialization...
00:05:35.798 spdk_app_start is called in Round 1.
00:05:35.798 Shutdown signal received, stop current app iteration
00:05:35.798 Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 reinitialization...
00:05:35.798 spdk_app_start is called in Round 2.
00:05:35.798 Shutdown signal received, stop current app iteration
00:05:35.798 Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 reinitialization...
00:05:35.798 spdk_app_start is called in Round 3.
00:05:35.798 Shutdown signal received, stop current app iteration
00:05:35.798 13:46:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:35.798 13:46:28 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:35.798
00:05:35.798 real 0m19.293s
00:05:35.798 user 0m40.931s
00:05:35.798 sys 0m3.100s
00:05:35.798 13:46:28 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:35.798 ************************************
00:05:35.798 END TEST app_repeat
************************************
00:05:35.798 13:46:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:36.057 13:46:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:36.057 13:46:28 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:36.057 13:46:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:36.057 13:46:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:36.057 13:46:28 event -- common/autotest_common.sh@10 -- # set +x
00:05:36.057 ************************************
00:05:36.057 START TEST cpu_locks
************************************
00:05:36.057 13:46:28 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:36.057 * Looking for test storage...
00:05:36.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:36.057 13:46:29 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:36.057 13:46:29 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:05:36.057 13:46:29 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:36.317 13:46:29 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:36.317 13:46:29 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:36.317 13:46:29 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:36.317 13:46:29 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:36.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.317 --rc genhtml_branch_coverage=1
00:05:36.317 --rc genhtml_function_coverage=1
00:05:36.317 --rc genhtml_legend=1
00:05:36.317 --rc geninfo_all_blocks=1
00:05:36.317 --rc geninfo_unexecuted_blocks=1
00:05:36.317
00:05:36.317 '
00:05:36.317 13:46:29 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:36.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.317 --rc genhtml_branch_coverage=1
00:05:36.317 --rc genhtml_function_coverage=1
00:05:36.317 --rc genhtml_legend=1 00:05:36.317 --rc geninfo_all_blocks=1 00:05:36.317 --rc geninfo_unexecuted_blocks=1 00:05:36.317 00:05:36.317 ' 00:05:36.317 13:46:29 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.317 --rc genhtml_branch_coverage=1 00:05:36.317 --rc genhtml_function_coverage=1 00:05:36.317 --rc genhtml_legend=1 00:05:36.317 --rc geninfo_all_blocks=1 00:05:36.317 --rc geninfo_unexecuted_blocks=1 00:05:36.317 00:05:36.317 ' 00:05:36.317 13:46:29 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.317 --rc genhtml_branch_coverage=1 00:05:36.317 --rc genhtml_function_coverage=1 00:05:36.317 --rc genhtml_legend=1 00:05:36.317 --rc geninfo_all_blocks=1 00:05:36.317 --rc geninfo_unexecuted_blocks=1 00:05:36.317 00:05:36.317 ' 00:05:36.317 13:46:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:36.317 13:46:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:36.317 13:46:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:36.317 13:46:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:36.317 13:46:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.317 13:46:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.317 13:46:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.317 ************************************ 00:05:36.317 START TEST default_locks 00:05:36.317 ************************************ 00:05:36.317 13:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:36.317 13:46:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60819 00:05:36.317 13:46:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.317 13:46:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60819 00:05:36.317 13:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60819 ']' 00:05:36.317 13:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.317 13:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.317 13:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.317 13:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.317 13:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.317 [2024-12-11 13:46:29.258985] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
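The lcov gate just above runs "lt 1.15 2" through cmp_versions in scripts/common.sh. A simplified reconstruction of that comparison from the trace (the real helper also tracks equality and supports more operators; padding missing components with 0 is an assumption):

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v lt=0 gt=0
        IFS=.-: read -ra ver1 <<< "$1"       # trace: "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"       # trace: "2"    -> (2)
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { gt=1; break; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { lt=1; break; }
        done
        case $op in
            '<') (( lt == 1 )) ;;
            '>') (( gt == 1 )) ;;
        esac
    }
    lt() { cmp_versions "$1" '<' "$2"; }     # lt 1.15 2 -> true, as in the trace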
00:05:36.317 [2024-12-11 13:46:29.259106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60819 ] 00:05:36.576 [2024-12-11 13:46:29.441344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.576 [2024-12-11 13:46:29.556374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.513 13:46:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.513 13:46:30 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:37.513 13:46:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60819 00:05:37.513 13:46:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60819 00:05:37.513 13:46:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.081 13:46:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60819 00:05:38.081 13:46:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60819 ']' 00:05:38.081 13:46:30 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60819 00:05:38.081 13:46:30 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:38.081 13:46:30 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.081 13:46:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60819 00:05:38.081 killing process with pid 60819 00:05:38.081 13:46:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.081 13:46:30 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.081 13:46:30 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60819' 00:05:38.081 13:46:30 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60819 00:05:38.081 13:46:30 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60819 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60819 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60819 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60819 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60819 ']' 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.618 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.618 ERROR: process (pid: 60819) is no longer running 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.618 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60819) - No such process 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:40.618 ************************************ 00:05:40.618 END TEST default_locks 00:05:40.618 ************************************ 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:40.618 00:05:40.618 real 0m4.203s 00:05:40.618 user 0m4.151s 00:05:40.618 sys 0m0.690s 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.618 13:46:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.618 13:46:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:40.618 13:46:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.618 13:46:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.618 13:46:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.618 ************************************ 00:05:40.618 START TEST default_locks_via_rpc 00:05:40.618 ************************************ 00:05:40.618 13:46:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:40.618 13:46:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60900 00:05:40.618 13:46:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.618 13:46:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60900 00:05:40.618 13:46:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60900 ']' 00:05:40.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
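The default_locks test that just finished boils down to: start spdk_tgt -m 0x1, assert the core lock exists, kill the target, then confirm waitforlisten on the dead pid fails and no lock files remain. The lock assertion itself is tiny; a sketch assembled from the two traced commands (lslocks -p PID piped into grep):

    locks_exist() {
        # the target holds a file lock named spdk_cpu_lock_NNN per claimed core,
        # so it shows up in lslocks output for that pid
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }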
00:05:40.618 13:46:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.618 13:46:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.618 13:46:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.618 13:46:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.618 13:46:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.618 [2024-12-11 13:46:33.536768] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:40.618 [2024-12-11 13:46:33.536920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60900 ] 00:05:40.880 [2024-12-11 13:46:33.719936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.880 [2024-12-11 13:46:33.838909] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.853 13:46:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.853 13:46:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:41.853 13:46:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:41.853 13:46:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.853 13:46:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.853 13:46:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.854 13:46:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:41.854 13:46:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:41.854 13:46:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:41.854 13:46:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:41.854 13:46:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:41.854 13:46:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.854 13:46:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.854 13:46:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.854 13:46:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60900 00:05:41.854 13:46:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60900 00:05:41.854 13:46:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.421 13:46:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60900 00:05:42.421 13:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60900 ']' 00:05:42.421 13:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60900 00:05:42.421 13:46:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:42.421 13:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.421 13:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60900 00:05:42.421 killing process with pid 60900 00:05:42.421 13:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.421 13:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.421 13:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60900' 00:05:42.421 13:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60900 00:05:42.421 13:46:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60900 00:05:44.957 00:05:44.957 real 0m4.224s 00:05:44.957 user 0m4.155s 00:05:44.957 sys 0m0.718s 00:05:44.957 13:46:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.957 ************************************ 00:05:44.957 13:46:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.957 END TEST default_locks_via_rpc 00:05:44.957 ************************************ 00:05:44.957 13:46:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:44.957 13:46:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.957 13:46:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.957 13:46:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.957 ************************************ 00:05:44.957 START TEST non_locking_app_on_locked_coremask 00:05:44.957 ************************************ 00:05:44.957 13:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:44.957 13:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60976 00:05:44.957 13:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.957 13:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60976 /var/tmp/spdk.sock 00:05:44.957 13:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60976 ']' 00:05:44.957 13:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.957 13:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.957 13:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
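The default_locks_via_rpc test traced above toggles the same locks over RPC instead of process lifecycle. Its shape, reconstructed from the rpc_cmd calls visible in the trace (rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock; no_locks asserts the /var/tmp/spdk_cpu_lock_* glob is empty):

    rpc_cmd framework_disable_cpumask_locks   # target releases its core lock file
    no_locks                                  # trace: lock_files=() and (( 0 != 0 ))
    rpc_cmd framework_enable_cpumask_locks    # target re-acquires the lock
    locks_exist "$spdk_tgt_pid"               # lslocks shows spdk_cpu_lock again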
00:05:44.957 13:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.957 13:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.957 [2024-12-11 13:46:37.839391] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:44.957 [2024-12-11 13:46:37.839520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60976 ] 00:05:45.216 [2024-12-11 13:46:38.020475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.216 [2024-12-11 13:46:38.136330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.154 13:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.154 13:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:46.154 13:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60992 00:05:46.154 13:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60992 /var/tmp/spdk2.sock 00:05:46.154 13:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:46.154 13:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60992 ']' 00:05:46.154 13:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.154 13:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.154 13:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.154 13:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.154 13:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.154 [2024-12-11 13:46:39.110409] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:46.154 [2024-12-11 13:46:39.110741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60992 ] 00:05:46.413 [2024-12-11 13:46:39.295986] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:46.413 [2024-12-11 13:46:39.296042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.672 [2024-12-11 13:46:39.519313] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.208 13:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.208 13:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:49.208 13:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60976 00:05:49.208 13:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60976 00:05:49.208 13:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.777 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60976 00:05:49.777 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60976 ']' 00:05:49.777 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60976 00:05:49.777 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:49.777 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.777 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60976 00:05:49.777 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.777 killing process with pid 60976 00:05:49.777 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.777 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60976' 00:05:49.777 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60976 00:05:49.777 13:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60976 00:05:55.049 13:46:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60992 00:05:55.049 13:46:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60992 ']' 00:05:55.049 13:46:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60992 00:05:55.049 13:46:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.049 13:46:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.049 13:46:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60992 00:05:55.049 killing process with pid 60992 00:05:55.049 13:46:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.049 13:46:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.049 13:46:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60992' 00:05:55.049 13:46:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60992 00:05:55.049 13:46:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60992 00:05:56.954 00:05:56.954 real 0m12.022s 00:05:56.954 user 0m12.263s 00:05:56.954 sys 0m1.477s 00:05:56.954 13:46:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.954 ************************************ 00:05:56.954 END TEST non_locking_app_on_locked_coremask 00:05:56.954 ************************************ 00:05:56.954 13:46:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.954 13:46:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:56.954 13:46:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.954 13:46:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.954 13:46:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.954 ************************************ 00:05:56.954 START TEST locking_app_on_unlocked_coremask 00:05:56.954 ************************************ 00:05:56.954 13:46:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:56.954 13:46:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61146 00:05:56.954 13:46:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:56.954 13:46:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61146 /var/tmp/spdk.sock 00:05:56.954 13:46:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61146 ']' 00:05:56.954 13:46:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.954 13:46:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.954 13:46:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.954 13:46:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.954 13:46:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.954 [2024-12-11 13:46:49.929788] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:56.954 [2024-12-11 13:46:49.929928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61146 ] 00:05:57.213 [2024-12-11 13:46:50.110136] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
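The non_locking_app_on_locked_coremask test that ended just above, and the locking_app_on_unlocked_coremask test starting here, both launch two targets on the same core mask, with one side running --disable-cpumask-locks (the side that logs "CPU core locks deactivated."). A sketch of that shape with the binary path and sockets from the log; the backgrounding and pid wiring are a reconstruction:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock
    # second target on the same mask, but with lock checks off -- it starts fine
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock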
00:05:57.213 [2024-12-11 13:46:50.110375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.213 [2024-12-11 13:46:50.222137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.151 13:46:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.151 13:46:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:58.151 13:46:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61168 00:05:58.151 13:46:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61168 /var/tmp/spdk2.sock 00:05:58.151 13:46:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:58.151 13:46:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61168 ']' 00:05:58.151 13:46:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.151 13:46:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.151 13:46:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.151 13:46:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.151 13:46:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.151 [2024-12-11 13:46:51.183061] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:05:58.151 [2024-12-11 13:46:51.183401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61168 ] 00:05:58.410 [2024-12-11 13:46:51.366010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.669 [2024-12-11 13:46:51.590419] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.202 13:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.202 13:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:01.202 13:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61168 00:06:01.202 13:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61168 00:06:01.202 13:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.783 13:46:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61146 00:06:01.783 13:46:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61146 ']' 00:06:01.783 13:46:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61146 00:06:01.783 13:46:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:01.783 13:46:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.783 13:46:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61146 00:06:01.783 killing process with pid 61146 00:06:01.783 13:46:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.783 13:46:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.783 13:46:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61146' 00:06:01.783 13:46:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61146 00:06:01.783 13:46:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61146 00:06:07.094 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61168 00:06:07.094 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61168 ']' 00:06:07.094 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61168 00:06:07.094 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:07.094 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.094 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61168 00:06:07.094 killing process with pid 61168 00:06:07.094 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.094 13:46:59 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.094 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61168' 00:06:07.094 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61168 00:06:07.094 13:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61168 00:06:09.000 ************************************ 00:06:09.000 END TEST locking_app_on_unlocked_coremask 00:06:09.000 ************************************ 00:06:09.000 00:06:09.000 real 0m12.065s 00:06:09.000 user 0m12.336s 00:06:09.000 sys 0m1.493s 00:06:09.000 13:47:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.000 13:47:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.000 13:47:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:09.000 13:47:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.000 13:47:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.000 13:47:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.000 ************************************ 00:06:09.000 START TEST locking_app_on_locked_coremask 00:06:09.000 ************************************ 00:06:09.000 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:09.000 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61320 00:06:09.000 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61320 /var/tmp/spdk.sock 00:06:09.000 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.000 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61320 ']' 00:06:09.000 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.000 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.000 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.000 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.000 13:47:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.260 [2024-12-11 13:47:02.067233] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:09.260 [2024-12-11 13:47:02.067358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61320 ] 00:06:09.260 [2024-12-11 13:47:02.237890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.519 [2024-12-11 13:47:02.352641] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61336 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61336 /var/tmp/spdk2.sock 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61336 /var/tmp/spdk2.sock 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61336 /var/tmp/spdk2.sock 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61336 ']' 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.457 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.457 [2024-12-11 13:47:03.304943] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:10.457 [2024-12-11 13:47:03.305062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61336 ] 00:06:10.457 [2024-12-11 13:47:03.488490] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61320 has claimed it. 00:06:10.457 [2024-12-11 13:47:03.488567] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:11.025 ERROR: process (pid: 61336) is no longer running 00:06:11.025 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61336) - No such process 00:06:11.025 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.025 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:11.025 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:11.025 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.025 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.025 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.025 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61320 00:06:11.025 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61320 00:06:11.025 13:47:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.593 13:47:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61320 00:06:11.593 13:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61320 ']' 00:06:11.593 13:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61320 00:06:11.593 13:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:11.593 13:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.593 13:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61320 00:06:11.593 13:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.593 13:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.593 killing process with pid 61320 00:06:11.593 13:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61320' 00:06:11.593 13:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61320 00:06:11.593 13:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61320 00:06:14.125 ************************************ 00:06:14.125 END TEST locking_app_on_locked_coremask 00:06:14.125 00:06:14.125 real 0m4.925s 00:06:14.125 user 0m5.094s 00:06:14.125 sys 0m0.836s 00:06:14.125 13:47:06 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.125 13:47:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.125 ************************************ 00:06:14.125 13:47:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:14.125 13:47:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.125 13:47:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.125 13:47:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.125 ************************************ 00:06:14.125 START TEST locking_overlapped_coremask 00:06:14.125 ************************************ 00:06:14.125 13:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:14.125 13:47:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61411 00:06:14.125 13:47:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:14.125 13:47:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61411 /var/tmp/spdk.sock 00:06:14.125 13:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61411 ']' 00:06:14.125 13:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.125 13:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.125 13:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.125 13:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.125 13:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.125 [2024-12-11 13:47:07.061112] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
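In the locking_app_on_locked_coremask test above, the second target is expected to die ("Cannot create lock on core 0, probably process 61320 has claimed it"), so its waitforlisten is wrapped in NOT. A sketch of NOT from the traced steps (the valid_exec_arg type check is elided here):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1   # signal death is a crash, not an expected failure
        (( !es == 0 ))               # trace: succeed only if the wrapped command failed
    }
    # usage as in the trace: NOT waitforlisten 61336 /var/tmp/spdk2.sock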
00:06:14.125 [2024-12-11 13:47:07.061239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61411 ] 00:06:14.384 [2024-12-11 13:47:07.242663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.384 [2024-12-11 13:47:07.367583] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.384 [2024-12-11 13:47:07.367727] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.384 [2024-12-11 13:47:07.367761] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61429 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61429 /var/tmp/spdk2.sock 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61429 /var/tmp/spdk2.sock 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61429 /var/tmp/spdk2.sock 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61429 ']' 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.320 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.320 [2024-12-11 13:47:08.363994] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:15.580 [2024-12-11 13:47:08.364950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61429 ] 00:06:15.580 [2024-12-11 13:47:08.548356] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61411 has claimed it. 00:06:15.580 [2024-12-11 13:47:08.548413] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:16.148 ERROR: process (pid: 61429) is no longer running 00:06:16.148 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61429) - No such process 00:06:16.148 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.148 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:16.148 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:16.148 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.148 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:16.148 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.148 13:47:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:16.148 13:47:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:16.148 13:47:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:16.148 13:47:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:16.148 13:47:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61411 00:06:16.148 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61411 ']' 00:06:16.148 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61411 00:06:16.148 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:16.148 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.148 13:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61411 00:06:16.148 13:47:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.148 13:47:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.148 13:47:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61411' 00:06:16.148 killing process with pid 61411 00:06:16.148 13:47:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61411 00:06:16.148 13:47:09 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61411 00:06:18.682 00:06:18.682 real 0m4.486s 00:06:18.682 user 0m12.061s 00:06:18.682 sys 0m0.651s 00:06:18.682 13:47:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.682 13:47:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.682 ************************************ 00:06:18.682 END TEST locking_overlapped_coremask 00:06:18.682 ************************************ 00:06:18.682 13:47:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:18.682 13:47:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.682 13:47:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.682 13:47:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.682 ************************************ 00:06:18.682 START TEST locking_overlapped_coremask_via_rpc 00:06:18.682 ************************************ 00:06:18.682 13:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:18.682 13:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61494 00:06:18.682 13:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61494 /var/tmp/spdk.sock 00:06:18.682 13:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:18.682 13:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61494 ']' 00:06:18.682 13:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.682 13:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.682 13:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.682 13:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.682 13:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.682 [2024-12-11 13:47:11.626010] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:18.682 [2024-12-11 13:47:11.626386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61494 ] 00:06:18.940 [2024-12-11 13:47:11.805063] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
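The section ends mid-way through locking_overlapped_coremask_via_rpc, with pid 61517 (-m 0x1c) colliding with 61494 (-m 0x7) on core 2, mirroring the locking_overlapped_coremask failure above. After that earlier failure, the trace verifies the surviving target still holds exactly its own locks via check_remaining_locks; a sketch matching the traced steps (000..002 correspond to cores 0-2 of the -m 0x7 mask):

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }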
00:06:18.940 [2024-12-11 13:47:11.805114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.940 [2024-12-11 13:47:11.918377] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.940 [2024-12-11 13:47:11.918529] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.940 [2024-12-11 13:47:11.918558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.876 13:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.876 13:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:19.876 13:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61517 00:06:19.876 13:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61517 /var/tmp/spdk2.sock 00:06:19.876 13:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:19.876 13:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61517 ']' 00:06:19.876 13:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.876 13:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.876 13:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.876 13:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.876 13:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.135 [2024-12-11 13:47:12.921957] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:20.135 [2024-12-11 13:47:12.922491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61517 ] 00:06:20.135 [2024-12-11 13:47:13.109901] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
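Unlike the plain coremask test, locking_overlapped_coremask_via_rpc starts both targets with --disable-cpumask-locks, so neither claims cores at boot; the locks are taken afterwards over JSON-RPC. A hedged sketch of the flow being exercised, with masks and socket path as in the trace and rpc.py standing in for the full scripts/rpc.py path:

    # Both targets come up lock-free, then the first claims its cores via RPC.
    spdk_tgt -m 0x7 --disable-cpumask-locks &                          # pid 61494
    spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # pid 61517
    rpc.py framework_enable_cpumask_locks                         # claims cores 0-2
    rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # must fail: core 2 taken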
00:06:20.135 [2024-12-11 13:47:13.109953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.393 [2024-12-11 13:47:13.357602] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.393 [2024-12-11 13:47:13.361007] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.393 [2024-12-11 13:47:13.361061] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.930 [2024-12-11 13:47:15.496018] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61494 has claimed it. 
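That failure is deterministic: 0x7 is binary 00111 (cores 0 through 2) and 0x1c is binary 11100 (cores 2 through 4), so the two masks intersect on exactly the core named in the error above. The overlap can be confirmed with shell arithmetic:

    # Worked check of the overlapping core masks from the trace.
    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4 -> bit 2 -> core 2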
00:06:22.930 request: 00:06:22.930 { 00:06:22.930 "method": "framework_enable_cpumask_locks", 00:06:22.930 "req_id": 1 00:06:22.930 } 00:06:22.930 Got JSON-RPC error response 00:06:22.930 response: 00:06:22.930 { 00:06:22.930 "code": -32603, 00:06:22.930 "message": "Failed to claim CPU core: 2" 00:06:22.930 } 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61494 /var/tmp/spdk.sock 00:06:22.930 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61494 ']' 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61517 /var/tmp/spdk2.sock 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61517 ']' 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
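The conflict surfaces as a structured JSON-RPC internal error (-32603) rather than a crash, which is what lets the NOT wrapper above assert on it. A sketch of the same probe, on the assumption that rpc.py exits non-zero whenever the server returns an error object:

    # The claim is expected to fail on the second target; a zero exit here
    # would mean core 2 was never actually locked by pid 61494.
    if rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo 'unexpected success: core 2 should already be locked' >&2
        exit 1
    fi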
00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:22.931 ************************************ 00:06:22.931 END TEST locking_overlapped_coremask_via_rpc 00:06:22.931 ************************************ 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:22.931 00:06:22.931 real 0m4.413s 00:06:22.931 user 0m1.236s 00:06:22.931 sys 0m0.242s 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.931 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.189 13:47:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:23.189 13:47:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61494 ]] 00:06:23.189 13:47:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61494 00:06:23.189 13:47:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61494 ']' 00:06:23.189 13:47:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61494 00:06:23.189 13:47:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:23.189 13:47:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.189 13:47:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61494 00:06:23.189 killing process with pid 61494 00:06:23.189 13:47:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.189 13:47:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.189 13:47:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61494' 00:06:23.189 13:47:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61494 00:06:23.189 13:47:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61494 00:06:25.719 13:47:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61517 ]] 00:06:25.719 13:47:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61517 00:06:25.719 13:47:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61517 ']' 00:06:25.719 13:47:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61517 00:06:25.719 13:47:18 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:25.719 13:47:18 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.719 
13:47:18 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61517 00:06:25.719 killing process with pid 61517 00:06:25.719 13:47:18 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:25.719 13:47:18 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:25.719 13:47:18 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61517' 00:06:25.719 13:47:18 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61517 00:06:25.719 13:47:18 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61517 00:06:28.252 13:47:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:28.252 13:47:20 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:28.252 13:47:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61494 ]] 00:06:28.252 13:47:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61494 00:06:28.252 13:47:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61494 ']' 00:06:28.252 13:47:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61494 00:06:28.252 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61494) - No such process 00:06:28.252 Process with pid 61494 is not found 00:06:28.252 Process with pid 61517 is not found 00:06:28.252 13:47:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61494 is not found' 00:06:28.252 13:47:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61517 ]] 00:06:28.252 13:47:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61517 00:06:28.252 13:47:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61517 ']' 00:06:28.252 13:47:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61517 00:06:28.252 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61517) - No such process 00:06:28.252 13:47:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61517 is not found' 00:06:28.252 13:47:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:28.252 00:06:28.252 real 0m52.074s 00:06:28.252 user 1m27.706s 00:06:28.252 sys 0m7.382s 00:06:28.252 13:47:20 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.252 13:47:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.252 ************************************ 00:06:28.252 END TEST cpu_locks 00:06:28.252 ************************************ 00:06:28.252 ************************************ 00:06:28.252 END TEST event 00:06:28.252 ************************************ 00:06:28.252 00:06:28.252 real 1m23.200s 00:06:28.252 user 2m28.832s 00:06:28.252 sys 0m11.790s 00:06:28.252 13:47:21 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.252 13:47:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.252 13:47:21 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:28.252 13:47:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.252 13:47:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.252 13:47:21 -- common/autotest_common.sh@10 -- # set +x 00:06:28.252 ************************************ 00:06:28.252 START TEST thread 00:06:28.252 ************************************ 00:06:28.252 13:47:21 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:28.252 * Looking for test storage... 
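Worth noting from the cpu_locks teardown just above: killprocess tolerates targets that have already exited, so the "No such process" lines are expected rather than failures. A condensed sketch of that idiom:

    # Teardown idiom from the trace above: a kill that races with normal exit
    # is reported but not treated as an error, and lock files are removed anyway.
    for pid in 61494 61517; do
        kill -0 "$pid" 2>/dev/null && kill "$pid" \
            || echo "Process with pid $pid is not found"
    done
    rm -f /var/tmp/spdk_cpu_lock_*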
00:06:28.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:28.252 13:47:21 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.252 13:47:21 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.252 13:47:21 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:28.511 13:47:21 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:28.511 13:47:21 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.511 13:47:21 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.511 13:47:21 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.511 13:47:21 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.511 13:47:21 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.511 13:47:21 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.511 13:47:21 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.511 13:47:21 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.511 13:47:21 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.511 13:47:21 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.511 13:47:21 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.511 13:47:21 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:28.511 13:47:21 thread -- scripts/common.sh@345 -- # : 1 00:06:28.511 13:47:21 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.511 13:47:21 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.511 13:47:21 thread -- scripts/common.sh@365 -- # decimal 1 00:06:28.511 13:47:21 thread -- scripts/common.sh@353 -- # local d=1 00:06:28.511 13:47:21 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.511 13:47:21 thread -- scripts/common.sh@355 -- # echo 1 00:06:28.511 13:47:21 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.511 13:47:21 thread -- scripts/common.sh@366 -- # decimal 2 00:06:28.511 13:47:21 thread -- scripts/common.sh@353 -- # local d=2 00:06:28.511 13:47:21 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.511 13:47:21 thread -- scripts/common.sh@355 -- # echo 2 00:06:28.511 13:47:21 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.511 13:47:21 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.511 13:47:21 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.511 13:47:21 thread -- scripts/common.sh@368 -- # return 0 00:06:28.511 13:47:21 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.511 13:47:21 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:28.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.511 --rc genhtml_branch_coverage=1 00:06:28.511 --rc genhtml_function_coverage=1 00:06:28.511 --rc genhtml_legend=1 00:06:28.511 --rc geninfo_all_blocks=1 00:06:28.511 --rc geninfo_unexecuted_blocks=1 00:06:28.511 00:06:28.511 ' 00:06:28.511 13:47:21 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:28.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.511 --rc genhtml_branch_coverage=1 00:06:28.511 --rc genhtml_function_coverage=1 00:06:28.511 --rc genhtml_legend=1 00:06:28.511 --rc geninfo_all_blocks=1 00:06:28.511 --rc geninfo_unexecuted_blocks=1 00:06:28.511 00:06:28.511 ' 00:06:28.511 13:47:21 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:28.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:28.511 --rc genhtml_branch_coverage=1 00:06:28.511 --rc genhtml_function_coverage=1 00:06:28.511 --rc genhtml_legend=1 00:06:28.511 --rc geninfo_all_blocks=1 00:06:28.511 --rc geninfo_unexecuted_blocks=1 00:06:28.511 00:06:28.511 ' 00:06:28.511 13:47:21 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:28.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.511 --rc genhtml_branch_coverage=1 00:06:28.511 --rc genhtml_function_coverage=1 00:06:28.511 --rc genhtml_legend=1 00:06:28.511 --rc geninfo_all_blocks=1 00:06:28.511 --rc geninfo_unexecuted_blocks=1 00:06:28.511 00:06:28.511 ' 00:06:28.511 13:47:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:28.511 13:47:21 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:28.511 13:47:21 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.511 13:47:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.511 ************************************ 00:06:28.511 START TEST thread_poller_perf 00:06:28.511 ************************************ 00:06:28.511 13:47:21 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:28.511 [2024-12-11 13:47:21.404791] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:28.511 [2024-12-11 13:47:21.405061] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61712 ] 00:06:28.770 [2024-12-11 13:47:21.586591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.770 [2024-12-11 13:47:21.701263] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.770 Running 1000 pollers for 1 seconds with 1 microseconds period. 
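The banner above decodes the flags from the run_test line: read this way, -b is the number of pollers to register, -l the poller period in microseconds (0 meaning the poller runs on every reactor iteration), and -t the run time in seconds. This reading is inferred from the banners in this log, not from the tool's help text. The two passes in this suite differ only in -l:

    # Flag reading inferred from the "Running 1000 pollers..." banners.
    poller_perf -b 1000 -l 1 -t 1   # 1000 pollers, 1 us period, 1 s: timed pollers
    poller_perf -b 1000 -l 0 -t 1   # 1000 pollers, 0 us period, 1 s: active pollers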
00:06:30.147 [2024-12-11T13:47:23.194Z] ====================================== 00:06:30.147 [2024-12-11T13:47:23.194Z] busy:2498057060 (cyc) 00:06:30.147 [2024-12-11T13:47:23.194Z] total_run_count: 398000 00:06:30.147 [2024-12-11T13:47:23.194Z] tsc_hz: 2490000000 (cyc) 00:06:30.147 [2024-12-11T13:47:23.194Z] ====================================== 00:06:30.147 [2024-12-11T13:47:23.194Z] poller_cost: 6276 (cyc), 2520 (nsec) 00:06:30.147 00:06:30.147 real 0m1.579s 00:06:30.147 user 0m1.361s 00:06:30.147 sys 0m0.109s 00:06:30.147 13:47:22 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.147 ************************************ 00:06:30.147 END TEST thread_poller_perf 00:06:30.147 ************************************ 00:06:30.147 13:47:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.147 13:47:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:30.147 13:47:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:30.147 13:47:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.147 13:47:22 thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.147 ************************************ 00:06:30.147 START TEST thread_poller_perf 00:06:30.147 ************************************ 00:06:30.147 13:47:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:30.147 [2024-12-11 13:47:23.055443] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:30.147 [2024-12-11 13:47:23.055561] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61754 ] 00:06:30.406 [2024-12-11 13:47:23.234971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.406 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:30.406 [2024-12-11 13:47:23.346556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.784 [2024-12-11T13:47:24.831Z] ====================================== 00:06:31.784 [2024-12-11T13:47:24.831Z] busy:2493986270 (cyc) 00:06:31.784 [2024-12-11T13:47:24.831Z] total_run_count: 4730000 00:06:31.784 [2024-12-11T13:47:24.831Z] tsc_hz: 2490000000 (cyc) 00:06:31.784 [2024-12-11T13:47:24.831Z] ====================================== 00:06:31.784 [2024-12-11T13:47:24.831Z] poller_cost: 527 (cyc), 211 (nsec) 00:06:31.784 00:06:31.784 real 0m1.570s 00:06:31.784 user 0m1.357s 00:06:31.784 sys 0m0.105s 00:06:31.784 13:47:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.784 ************************************ 00:06:31.784 END TEST thread_poller_perf 00:06:31.784 ************************************ 00:06:31.784 13:47:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:31.784 13:47:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:31.784 00:06:31.784 real 0m3.526s 00:06:31.784 user 0m2.911s 00:06:31.784 sys 0m0.407s 00:06:31.784 ************************************ 00:06:31.784 END TEST thread 00:06:31.784 ************************************ 00:06:31.784 13:47:24 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.784 13:47:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.784 13:47:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:31.784 13:47:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:31.784 13:47:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.784 13:47:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.784 13:47:24 -- common/autotest_common.sh@10 -- # set +x 00:06:31.784 ************************************ 00:06:31.784 START TEST app_cmdline 00:06:31.784 ************************************ 00:06:31.784 13:47:24 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:32.044 * Looking for test storage... 
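Across the two reports above, poller_cost is simply busy cycles divided by total_run_count, converted to nanoseconds at tsc_hz. Reproducing both result lines with shell arithmetic:

    # Worked example using the figures printed in the two reports above.
    cost() {  # args: busy_cycles total_run_count tsc_hz
        local cyc=$(( $1 / $2 ))
        echo "poller_cost: ${cyc} (cyc), $(( cyc * 1000000000 / $3 )) (nsec)"
    }
    cost 2498057060  398000 2490000000   # 1 us period: 6276 (cyc), 2520 (nsec)
    cost 2493986270 4730000 2490000000   # 0 us period:  527 (cyc),  211 (nsec)

The roughly 12x gap between the passes plausibly reflects the cost of servicing timed pollers through the reactor's timer handling rather than calling active pollers directly, though the log itself does not say so.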
00:06:32.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:32.044 13:47:24 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:32.044 13:47:24 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:32.044 13:47:24 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:32.044 13:47:24 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.044 13:47:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:32.045 13:47:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.045 13:47:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:32.045 13:47:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:32.045 13:47:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.045 13:47:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:32.045 13:47:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.045 13:47:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.045 13:47:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.045 13:47:24 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:32.045 13:47:24 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.045 13:47:24 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:32.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.045 --rc genhtml_branch_coverage=1 00:06:32.045 --rc genhtml_function_coverage=1 00:06:32.045 --rc genhtml_legend=1 00:06:32.045 --rc geninfo_all_blocks=1 00:06:32.045 --rc geninfo_unexecuted_blocks=1 00:06:32.045 00:06:32.045 ' 00:06:32.045 13:47:24 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:32.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.045 --rc genhtml_branch_coverage=1 00:06:32.045 --rc genhtml_function_coverage=1 00:06:32.045 --rc genhtml_legend=1 00:06:32.045 --rc geninfo_all_blocks=1 00:06:32.045 --rc geninfo_unexecuted_blocks=1 00:06:32.045 
00:06:32.045 ' 00:06:32.045 13:47:24 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:32.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.045 --rc genhtml_branch_coverage=1 00:06:32.045 --rc genhtml_function_coverage=1 00:06:32.045 --rc genhtml_legend=1 00:06:32.045 --rc geninfo_all_blocks=1 00:06:32.045 --rc geninfo_unexecuted_blocks=1 00:06:32.045 00:06:32.045 ' 00:06:32.045 13:47:24 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:32.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.045 --rc genhtml_branch_coverage=1 00:06:32.045 --rc genhtml_function_coverage=1 00:06:32.045 --rc genhtml_legend=1 00:06:32.045 --rc geninfo_all_blocks=1 00:06:32.045 --rc geninfo_unexecuted_blocks=1 00:06:32.045 00:06:32.045 ' 00:06:32.045 13:47:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:32.045 13:47:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61842 00:06:32.045 13:47:24 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:32.045 13:47:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61842 00:06:32.045 13:47:24 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61842 ']' 00:06:32.045 13:47:24 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.045 13:47:24 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.045 13:47:24 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.045 13:47:24 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.045 13:47:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:32.045 [2024-12-11 13:47:25.044473] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:32.045 [2024-12-11 13:47:25.044787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61842 ] 00:06:32.303 [2024-12-11 13:47:25.227423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.303 [2024-12-11 13:47:25.340312] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.241 13:47:26 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.241 13:47:26 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:33.241 13:47:26 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:33.500 { 00:06:33.500 "version": "SPDK v25.01-pre git sha1 4dfeb7f95", 00:06:33.500 "fields": { 00:06:33.500 "major": 25, 00:06:33.500 "minor": 1, 00:06:33.500 "patch": 0, 00:06:33.500 "suffix": "-pre", 00:06:33.500 "commit": "4dfeb7f95" 00:06:33.500 } 00:06:33.500 } 00:06:33.500 13:47:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:33.500 13:47:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:33.500 13:47:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:33.500 13:47:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:33.500 13:47:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:33.500 13:47:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:33.500 13:47:26 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.500 13:47:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:33.500 13:47:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:33.500 13:47:26 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.500 13:47:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:33.500 13:47:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:33.500 13:47:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:33.500 13:47:26 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:33.500 13:47:26 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:33.500 13:47:26 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:33.500 13:47:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.500 13:47:26 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:33.500 13:47:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.500 13:47:26 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:33.500 13:47:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.500 13:47:26 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:33.500 13:47:26 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:33.500 13:47:26 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:33.760 request: 00:06:33.760 { 00:06:33.760 "method": "env_dpdk_get_mem_stats", 00:06:33.760 "req_id": 1 00:06:33.760 } 00:06:33.760 Got JSON-RPC error response 00:06:33.760 response: 00:06:33.760 { 00:06:33.760 "code": -32601, 00:06:33.760 "message": "Method not found" 00:06:33.760 } 00:06:33.760 13:47:26 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:33.760 13:47:26 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.760 13:47:26 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:33.760 13:47:26 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.760 13:47:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61842 00:06:33.760 13:47:26 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61842 ']' 00:06:33.760 13:47:26 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61842 00:06:33.760 13:47:26 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:33.760 13:47:26 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.760 13:47:26 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61842 00:06:33.760 killing process with pid 61842 00:06:33.760 13:47:26 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.760 13:47:26 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.760 13:47:26 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61842' 00:06:33.760 13:47:26 app_cmdline -- common/autotest_common.sh@973 -- # kill 61842 00:06:33.760 13:47:26 app_cmdline -- common/autotest_common.sh@978 -- # wait 61842 00:06:36.294 00:06:36.294 real 0m4.406s 00:06:36.294 user 0m4.554s 00:06:36.294 sys 0m0.655s 00:06:36.294 13:47:29 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.294 13:47:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:36.294 ************************************ 00:06:36.294 END TEST app_cmdline 00:06:36.294 ************************************ 00:06:36.294 13:47:29 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:36.294 13:47:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.294 13:47:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.294 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:06:36.294 ************************************ 00:06:36.294 START TEST version 00:06:36.294 ************************************ 00:06:36.294 13:47:29 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:36.294 * Looking for test storage... 
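The app_cmdline suite above is an allowlist test: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two calls succeed while anything else, here env_dpdk_get_mem_stats, is rejected with JSON-RPC -32601 (Method not found). A sketch of the same probe, again abbreviating the rpc.py path:

    # Against a target started with:
    #   spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
    rpc.py spdk_get_version          # allowed: prints the version object
    rpc.py rpc_get_methods           # allowed: lists exactly the permitted methods
    rpc.py env_dpdk_get_mem_stats    # rejected with "Method not found" (-32601)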
00:06:36.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:36.294 13:47:29 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:36.294 13:47:29 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:36.294 13:47:29 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:36.554 13:47:29 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:36.554 13:47:29 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.554 13:47:29 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.554 13:47:29 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.554 13:47:29 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.554 13:47:29 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.554 13:47:29 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.554 13:47:29 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.554 13:47:29 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.554 13:47:29 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.554 13:47:29 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.554 13:47:29 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.554 13:47:29 version -- scripts/common.sh@344 -- # case "$op" in 00:06:36.554 13:47:29 version -- scripts/common.sh@345 -- # : 1 00:06:36.554 13:47:29 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.554 13:47:29 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.554 13:47:29 version -- scripts/common.sh@365 -- # decimal 1 00:06:36.554 13:47:29 version -- scripts/common.sh@353 -- # local d=1 00:06:36.554 13:47:29 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.554 13:47:29 version -- scripts/common.sh@355 -- # echo 1 00:06:36.554 13:47:29 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.554 13:47:29 version -- scripts/common.sh@366 -- # decimal 2 00:06:36.554 13:47:29 version -- scripts/common.sh@353 -- # local d=2 00:06:36.554 13:47:29 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.554 13:47:29 version -- scripts/common.sh@355 -- # echo 2 00:06:36.554 13:47:29 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.554 13:47:29 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.554 13:47:29 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.554 13:47:29 version -- scripts/common.sh@368 -- # return 0 00:06:36.554 13:47:29 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.554 13:47:29 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:36.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.554 --rc genhtml_branch_coverage=1 00:06:36.554 --rc genhtml_function_coverage=1 00:06:36.554 --rc genhtml_legend=1 00:06:36.554 --rc geninfo_all_blocks=1 00:06:36.554 --rc geninfo_unexecuted_blocks=1 00:06:36.554 00:06:36.554 ' 00:06:36.554 13:47:29 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:36.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.554 --rc genhtml_branch_coverage=1 00:06:36.554 --rc genhtml_function_coverage=1 00:06:36.554 --rc genhtml_legend=1 00:06:36.554 --rc geninfo_all_blocks=1 00:06:36.554 --rc geninfo_unexecuted_blocks=1 00:06:36.554 00:06:36.554 ' 00:06:36.554 13:47:29 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:36.554 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:36.554 --rc genhtml_branch_coverage=1 00:06:36.554 --rc genhtml_function_coverage=1 00:06:36.554 --rc genhtml_legend=1 00:06:36.554 --rc geninfo_all_blocks=1 00:06:36.554 --rc geninfo_unexecuted_blocks=1 00:06:36.554 00:06:36.554 ' 00:06:36.554 13:47:29 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:36.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.554 --rc genhtml_branch_coverage=1 00:06:36.554 --rc genhtml_function_coverage=1 00:06:36.554 --rc genhtml_legend=1 00:06:36.554 --rc geninfo_all_blocks=1 00:06:36.554 --rc geninfo_unexecuted_blocks=1 00:06:36.554 00:06:36.554 ' 00:06:36.554 13:47:29 version -- app/version.sh@17 -- # get_header_version major 00:06:36.554 13:47:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.554 13:47:29 version -- app/version.sh@14 -- # cut -f2 00:06:36.554 13:47:29 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.554 13:47:29 version -- app/version.sh@17 -- # major=25 00:06:36.554 13:47:29 version -- app/version.sh@18 -- # get_header_version minor 00:06:36.554 13:47:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.554 13:47:29 version -- app/version.sh@14 -- # cut -f2 00:06:36.554 13:47:29 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.554 13:47:29 version -- app/version.sh@18 -- # minor=1 00:06:36.554 13:47:29 version -- app/version.sh@19 -- # get_header_version patch 00:06:36.554 13:47:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.554 13:47:29 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.554 13:47:29 version -- app/version.sh@14 -- # cut -f2 00:06:36.554 13:47:29 version -- app/version.sh@19 -- # patch=0 00:06:36.554 13:47:29 version -- app/version.sh@20 -- # get_header_version suffix 00:06:36.554 13:47:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.554 13:47:29 version -- app/version.sh@14 -- # cut -f2 00:06:36.554 13:47:29 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.554 13:47:29 version -- app/version.sh@20 -- # suffix=-pre 00:06:36.554 13:47:29 version -- app/version.sh@22 -- # version=25.1 00:06:36.554 13:47:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:36.554 13:47:29 version -- app/version.sh@28 -- # version=25.1rc0 00:06:36.554 13:47:29 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:36.554 13:47:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:36.554 13:47:29 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:36.554 13:47:29 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:36.554 00:06:36.554 real 0m0.322s 00:06:36.554 user 0m0.187s 00:06:36.554 sys 0m0.193s 00:06:36.554 ************************************ 00:06:36.554 END TEST version 00:06:36.554 ************************************ 00:06:36.554 13:47:29 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.554 13:47:29 version -- common/autotest_common.sh@10 -- # set +x 00:06:36.555 13:47:29 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:36.555 13:47:29 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:36.555 13:47:29 -- spdk/autotest.sh@194 -- # uname -s 00:06:36.555 13:47:29 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:36.555 13:47:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:36.555 13:47:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:36.555 13:47:29 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:36.555 13:47:29 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:36.555 13:47:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:36.555 13:47:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.555 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:06:36.555 ************************************ 00:06:36.555 START TEST blockdev_nvme 00:06:36.555 ************************************ 00:06:36.555 13:47:29 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:36.815 * Looking for test storage... 00:06:36.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:36.815 13:47:29 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:36.815 13:47:29 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:06:36.815 13:47:29 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:36.815 13:47:29 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.815 13:47:29 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:36.815 13:47:29 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.815 13:47:29 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:36.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.815 --rc genhtml_branch_coverage=1 00:06:36.815 --rc genhtml_function_coverage=1 00:06:36.815 --rc genhtml_legend=1 00:06:36.815 --rc geninfo_all_blocks=1 00:06:36.815 --rc geninfo_unexecuted_blocks=1 00:06:36.815 00:06:36.815 ' 00:06:36.815 13:47:29 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:36.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.815 --rc genhtml_branch_coverage=1 00:06:36.815 --rc genhtml_function_coverage=1 00:06:36.815 --rc genhtml_legend=1 00:06:36.815 --rc geninfo_all_blocks=1 00:06:36.815 --rc geninfo_unexecuted_blocks=1 00:06:36.815 00:06:36.815 ' 00:06:36.815 13:47:29 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:36.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.815 --rc genhtml_branch_coverage=1 00:06:36.815 --rc genhtml_function_coverage=1 00:06:36.815 --rc genhtml_legend=1 00:06:36.815 --rc geninfo_all_blocks=1 00:06:36.815 --rc geninfo_unexecuted_blocks=1 00:06:36.815 00:06:36.815 ' 00:06:36.815 13:47:29 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:36.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.815 --rc genhtml_branch_coverage=1 00:06:36.815 --rc genhtml_function_coverage=1 00:06:36.815 --rc genhtml_legend=1 00:06:36.815 --rc geninfo_all_blocks=1 00:06:36.815 --rc geninfo_unexecuted_blocks=1 00:06:36.815 00:06:36.815 ' 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:36.815 13:47:29 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62026 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:36.815 13:47:29 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 62026 00:06:36.815 13:47:29 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 62026 ']' 00:06:36.815 13:47:29 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.815 13:47:29 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.815 13:47:29 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.815 13:47:29 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.815 13:47:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.075 [2024-12-11 13:47:29.945694] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:37.075 [2024-12-11 13:47:29.946035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62026 ] 00:06:37.335 [2024-12-11 13:47:30.128121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.335 [2024-12-11 13:47:30.238009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.271 13:47:31 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.271 13:47:31 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:38.271 13:47:31 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:38.271 13:47:31 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:38.271 13:47:31 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:38.271 13:47:31 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:38.271 13:47:31 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:38.271 13:47:31 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:38.271 13:47:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.271 13:47:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:38.530 13:47:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.530 13:47:31 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:38.530 13:47:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.530 13:47:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:38.530 13:47:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.530 13:47:31 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:38.530 13:47:31 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:38.530 13:47:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.530 13:47:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:38.530 13:47:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.530 13:47:31 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:38.530 13:47:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.530 13:47:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:38.789 13:47:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.789 13:47:31 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:38.789 13:47:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.789 13:47:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:38.789 13:47:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.789 13:47:31 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:38.789 13:47:31 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:38.789 13:47:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.789 13:47:31 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:38.789 13:47:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:38.789 13:47:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.789 13:47:31 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:38.789 13:47:31 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:38.790 13:47:31 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "75362ed7-104d-4803-99aa-358ebba43a85"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "75362ed7-104d-4803-99aa-358ebba43a85",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "18fac5a2-fd99-4ec7-816a-b13c0c6176c4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "18fac5a2-fd99-4ec7-816a-b13c0c6176c4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "7dc9e268-82ec-436e-9861-36f0f7d05126"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7dc9e268-82ec-436e-9861-36f0f7d05126",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "f3ec1fa7-679e-4f12-9a02-985e0f84c1db"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f3ec1fa7-679e-4f12-9a02-985e0f84c1db",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "693bf6ad-152b-48c1-b2f2-bb09f76f11ae"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "693bf6ad-152b-48c1-b2f2-bb09f76f11ae",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "00de5b5e-1560-48ab-925f-fd6c0ccbfe9e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "00de5b5e-1560-48ab-925f-fd6c0ccbfe9e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:38.790 13:47:31 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:38.790 13:47:31 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:38.790 13:47:31 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:38.790 13:47:31 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 62026 00:06:38.790 13:47:31 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 62026 ']' 00:06:38.790 13:47:31 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 62026 00:06:38.790 13:47:31 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:38.790 13:47:31 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.790 13:47:31 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62026 00:06:38.790 killing process with pid 62026 00:06:38.790 13:47:31 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.790 13:47:31 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.790 13:47:31 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62026' 00:06:38.790 13:47:31 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 62026 00:06:38.790 13:47:31 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 62026 00:06:41.351 13:47:34 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:41.351 13:47:34 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:41.351 13:47:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:41.351 13:47:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.351 13:47:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.351 ************************************ 00:06:41.351 START TEST bdev_hello_world 00:06:41.351 ************************************ 00:06:41.351 13:47:34 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:41.351 [2024-12-11 13:47:34.281565] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:41.351 [2024-12-11 13:47:34.281684] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62124 ] 00:06:41.609 [2024-12-11 13:47:34.461553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.609 [2024-12-11 13:47:34.559036] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.185 [2024-12-11 13:47:35.208448] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:42.185 [2024-12-11 13:47:35.208503] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:42.185 [2024-12-11 13:47:35.208525] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:42.185 [2024-12-11 13:47:35.211509] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:42.185 [2024-12-11 13:47:35.212196] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:42.185 [2024-12-11 13:47:35.212234] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:42.185 [2024-12-11 13:47:35.212498] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
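(For context, the Nvme0n1..Nvme3n1 bdevs this section works with were assembled by setup_nvme_conf above: gen_nvme.sh emits one bdev_nvme_attach_controller entry per local PCIe controller, 0000:00:10.0 through 0000:00:13.0 here, and the result is loaded into the target over RPC. Stripped of the harness wrappers, the same two steps are, as a sketch run from the SPDK repo root against a listening target:)

  # Build an NVMe bdev config and load it into a running spdk_tgt
  mapfile -t json < <(scripts/gen_nvme.sh)
  scripts/rpc.py -s /var/tmp/spdk.sock load_subsystem_config -j "${json[*]}"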
00:06:42.185 00:06:42.185 [2024-12-11 13:47:35.212520] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:43.564 ************************************ 00:06:43.565 END TEST bdev_hello_world 00:06:43.565 ************************************ 00:06:43.565 00:06:43.565 real 0m2.108s 00:06:43.565 user 0m1.737s 00:06:43.565 sys 0m0.263s 00:06:43.565 13:47:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.565 13:47:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:43.565 13:47:36 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:43.565 13:47:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:43.565 13:47:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.565 13:47:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:43.565 ************************************ 00:06:43.565 START TEST bdev_bounds 00:06:43.565 ************************************ 00:06:43.565 13:47:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:43.565 13:47:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62171 00:06:43.565 13:47:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:43.565 13:47:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62171' 00:06:43.565 Process bdevio pid: 62171 00:06:43.565 13:47:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:43.565 13:47:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62171 00:06:43.565 13:47:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62171 ']' 00:06:43.565 13:47:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.565 13:47:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.565 13:47:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.565 13:47:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.565 13:47:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:43.565 [2024-12-11 13:47:36.477552] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
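(The bdev_bounds run launched above drives bdevio in wait mode: -w keeps the app idle after init, -s 0 passes the PRE_RESERVED_MEM value from earlier, and the CUnit suites below only run once tests.py sends perform_tests over the RPC socket. The orchestration pattern, roughly, as a sketch using the paths from this run:)

  # Start bdevio idle, then trigger the suites over RPC (pattern from blockdev.sh)
  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  bdevio_pid=$!
  # ...wait for /var/tmp/spdk.sock as sketched earlier, then:
  test/bdev/bdevio/tests.py perform_tests
  wait $bdevio_pid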
00:06:43.565 [2024-12-11 13:47:36.477816] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62171 ] 00:06:43.824 [2024-12-11 13:47:36.660285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.824 [2024-12-11 13:47:36.765862] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.824 [2024-12-11 13:47:36.765963] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.824 [2024-12-11 13:47:36.765991] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.759 13:47:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.759 13:47:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:44.760 13:47:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:44.760 I/O targets: 00:06:44.760 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:44.760 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:44.760 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:44.760 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:44.760 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:44.760 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:44.760 00:06:44.760 00:06:44.760 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.760 http://cunit.sourceforge.net/ 00:06:44.760 00:06:44.760 00:06:44.760 Suite: bdevio tests on: Nvme3n1 00:06:44.760 Test: blockdev write read block ...passed 00:06:44.760 Test: blockdev write zeroes read block ...passed 00:06:44.760 Test: blockdev write zeroes read no split ...passed 00:06:44.760 Test: blockdev write zeroes read split ...passed 00:06:44.760 Test: blockdev write zeroes read split partial ...passed 00:06:44.760 Test: blockdev reset ...[2024-12-11 13:47:37.635722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:44.760 passed 00:06:44.760 Test: blockdev write read 8 blocks ...[2024-12-11 13:47:37.639700] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:44.760 passed 00:06:44.760 Test: blockdev write read size > 128k ...passed 00:06:44.760 Test: blockdev write read invalid size ...passed 00:06:44.760 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.760 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.760 Test: blockdev write read max offset ...passed 00:06:44.760 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.760 Test: blockdev writev readv 8 blocks ...passed 00:06:44.760 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.760 Test: blockdev writev readv block ...passed 00:06:44.760 Test: blockdev writev readv size > 128k ...passed 00:06:44.760 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.760 Test: blockdev comparev and writev ...[2024-12-11 13:47:37.648369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5e0a000 len:0x1000 00:06:44.760 [2024-12-11 13:47:37.648421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:44.760 passed 00:06:44.760 Test: blockdev nvme passthru rw ...passed 00:06:44.760 Test: blockdev nvme passthru vendor specific ...passed 00:06:44.760 Test: blockdev nvme admin passthru ...[2024-12-11 13:47:37.649262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:44.760 [2024-12-11 13:47:37.649307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:44.760 passed 00:06:44.760 Test: blockdev copy ...passed 00:06:44.760 Suite: bdevio tests on: Nvme2n3 00:06:44.760 Test: blockdev write read block ...passed 00:06:44.760 Test: blockdev write zeroes read block ...passed 00:06:44.760 Test: blockdev write zeroes read no split ...passed 00:06:44.760 Test: blockdev write zeroes read split ...passed 00:06:44.760 Test: blockdev write zeroes read split partial ...passed 00:06:44.760 Test: blockdev reset ...[2024-12-11 13:47:37.725343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:44.760 passed 00:06:44.760 Test: blockdev write read 8 blocks ...[2024-12-11 13:47:37.729271] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:44.760 passed 00:06:44.760 Test: blockdev write read size > 128k ...passed 00:06:44.760 Test: blockdev write read invalid size ...passed 00:06:44.760 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.760 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.760 Test: blockdev write read max offset ...passed 00:06:44.760 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.760 Test: blockdev writev readv 8 blocks ...passed 00:06:44.760 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.760 Test: blockdev writev readv block ...passed 00:06:44.760 Test: blockdev writev readv size > 128k ...passed 00:06:44.760 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.760 Test: blockdev comparev and writev ...[2024-12-11 13:47:37.738719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x298806000 len:0x1000 00:06:44.760 [2024-12-11 13:47:37.738768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:44.760 passed 00:06:44.760 Test: blockdev nvme passthru rw ...passed 00:06:44.760 Test: blockdev nvme passthru vendor specific ...[2024-12-11 13:47:37.739679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:44.760 passed 00:06:44.760 Test: blockdev nvme admin passthru ...[2024-12-11 13:47:37.739715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:44.760 passed 00:06:44.760 Test: blockdev copy ...passed 00:06:44.760 Suite: bdevio tests on: Nvme2n2 00:06:44.760 Test: blockdev write read block ...passed 00:06:44.760 Test: blockdev write zeroes read block ...passed 00:06:44.760 Test: blockdev write zeroes read no split ...passed 00:06:44.760 Test: blockdev write zeroes read split ...passed 00:06:45.020 Test: blockdev write zeroes read split partial ...passed 00:06:45.020 Test: blockdev reset ...[2024-12-11 13:47:37.813788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:45.020 passed 00:06:45.020 Test: blockdev write read 8 blocks ...[2024-12-11 13:47:37.817751] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
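(The COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) notices repeating in each suite are expected negative paths, not failures: the comparev-and-writev test issues an NVMe Compare that is meant to miscompare, and the passthru tests send commands the QEMU controller rejects; every such test still ends in passed. The parenthesized pair is (status code type/status code) in hex: SCT 0x2 is the media and data integrity class, in which SC 0x85 is Compare Failure, while SCT 0x0 is the generic class, in which SC 0x01 is Invalid Command Opcode. A trivial expansion of the notation, illustrative only:)

  # "(02/85)" -> SCT=0x2 SC=0x85, i.e. media/data integrity class: Compare Failure
  printf 'SCT=0x%x SC=0x%x\n' 0x02 0x85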
00:06:45.020 passed 00:06:45.020 Test: blockdev write read size > 128k ...passed 00:06:45.020 Test: blockdev write read invalid size ...passed 00:06:45.020 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:45.020 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:45.020 Test: blockdev write read max offset ...passed 00:06:45.020 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:45.020 Test: blockdev writev readv 8 blocks ...passed 00:06:45.020 Test: blockdev writev readv 30 x 1block ...passed 00:06:45.020 Test: blockdev writev readv block ...passed 00:06:45.020 Test: blockdev writev readv size > 128k ...passed 00:06:45.020 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:45.020 Test: blockdev comparev and writev ...[2024-12-11 13:47:37.826763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5e3c000 len:0x1000 00:06:45.020 [2024-12-11 13:47:37.826812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:45.020 passed 00:06:45.020 Test: blockdev nvme passthru rw ...passed 00:06:45.020 Test: blockdev nvme passthru vendor specific ...[2024-12-11 13:47:37.827665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:45.020 passed 00:06:45.020 Test: blockdev nvme admin passthru ...[2024-12-11 13:47:37.827702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:45.020 passed 00:06:45.020 Test: blockdev copy ...passed 00:06:45.020 Suite: bdevio tests on: Nvme2n1 00:06:45.020 Test: blockdev write read block ...passed 00:06:45.020 Test: blockdev write zeroes read block ...passed 00:06:45.020 Test: blockdev write zeroes read no split ...passed 00:06:45.020 Test: blockdev write zeroes read split ...passed 00:06:45.020 Test: blockdev write zeroes read split partial ...passed 00:06:45.020 Test: blockdev reset ...[2024-12-11 13:47:37.908140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:45.020 passed 00:06:45.020 [2024-12-11 13:47:37.912307] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:45.020 00:06:45.020 Test: blockdev write read 8 blocks ...passed 00:06:45.020 Test: blockdev write read size > 128k ...passed 00:06:45.020 Test: blockdev write read invalid size ...passed 00:06:45.020 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:45.020 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:45.020 Test: blockdev write read max offset ...passed 00:06:45.020 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:45.020 Test: blockdev writev readv 8 blocks ...passed 00:06:45.020 Test: blockdev writev readv 30 x 1block ...passed 00:06:45.020 Test: blockdev writev readv block ...passed 00:06:45.020 Test: blockdev writev readv size > 128k ...passed 00:06:45.020 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:45.020 Test: blockdev comparev and writev ...[2024-12-11 13:47:37.922442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5e38000 len:0x1000 00:06:45.020 [2024-12-11 13:47:37.922638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:45.020 passed 00:06:45.020 Test: blockdev nvme passthru rw ...passed 00:06:45.020 Test: blockdev nvme passthru vendor specific ...[2024-12-11 13:47:37.923927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:45.020 [2024-12-11 13:47:37.924031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:45.020 passed 00:06:45.020 Test: blockdev nvme admin passthru ...passed 00:06:45.020 Test: blockdev copy ...passed 00:06:45.020 Suite: bdevio tests on: Nvme1n1 00:06:45.020 Test: blockdev write read block ...passed 00:06:45.020 Test: blockdev write zeroes read block ...passed 00:06:45.020 Test: blockdev write zeroes read no split ...passed 00:06:45.020 Test: blockdev write zeroes read split ...passed 00:06:45.020 Test: blockdev write zeroes read split partial ...passed 00:06:45.020 Test: blockdev reset ...[2024-12-11 13:47:38.000019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:45.020 [2024-12-11 13:47:38.003704] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:06:45.020 passed 00:06:45.020 Test: blockdev write read 8 blocks ...passed 00:06:45.020 Test: blockdev write read size > 128k ...passed 00:06:45.020 Test: blockdev write read invalid size ...passed 00:06:45.020 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:45.020 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:45.020 Test: blockdev write read max offset ...passed 00:06:45.020 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:45.020 Test: blockdev writev readv 8 blocks ...passed 00:06:45.020 Test: blockdev writev readv 30 x 1block ...passed 00:06:45.020 Test: blockdev writev readv block ...passed 00:06:45.020 Test: blockdev writev readv size > 128k ...passed 00:06:45.020 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:45.020 Test: blockdev comparev and writev ...[2024-12-11 13:47:38.012604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5e34000 len:0x1000 00:06:45.020 [2024-12-11 13:47:38.012654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:45.020 passed 00:06:45.020 Test: blockdev nvme passthru rw ...passed 00:06:45.020 Test: blockdev nvme passthru vendor specific ...[2024-12-11 13:47:38.013635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:45.020 passed 00:06:45.020 Test: blockdev nvme admin passthru ...[2024-12-11 13:47:38.013678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:45.020 passed 00:06:45.020 Test: blockdev copy ...passed 00:06:45.020 Suite: bdevio tests on: Nvme0n1 00:06:45.020 Test: blockdev write read block ...passed 00:06:45.020 Test: blockdev write zeroes read block ...passed 00:06:45.020 Test: blockdev write zeroes read no split ...passed 00:06:45.020 Test: blockdev write zeroes read split ...passed 00:06:45.280 Test: blockdev write zeroes read split partial ...passed 00:06:45.280 Test: blockdev reset ...[2024-12-11 13:47:38.091123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:45.280 passed 00:06:45.280 Test: blockdev write read 8 blocks ...[2024-12-11 13:47:38.094733] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:06:45.280 passed 00:06:45.280 Test: blockdev write read size > 128k ...passed 00:06:45.280 Test: blockdev write read invalid size ...passed 00:06:45.280 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:45.280 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:45.280 Test: blockdev write read max offset ...passed 00:06:45.280 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:45.280 Test: blockdev writev readv 8 blocks ...passed 00:06:45.280 Test: blockdev writev readv 30 x 1block ...passed 00:06:45.280 Test: blockdev writev readv block ...passed 00:06:45.280 Test: blockdev writev readv size > 128k ...passed 00:06:45.280 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:45.280 Test: blockdev comparev and writev ...passed 00:06:45.280 Test: blockdev nvme passthru rw ...[2024-12-11 13:47:38.101971] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:45.280 separate metadata which is not supported yet. 00:06:45.280 passed 00:06:45.280 Test: blockdev nvme passthru vendor specific ...passed 00:06:45.280 Test: blockdev nvme admin passthru ...[2024-12-11 13:47:38.102556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:45.280 [2024-12-11 13:47:38.102600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:45.280 passed 00:06:45.280 Test: blockdev copy ...passed 00:06:45.280 00:06:45.280 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.280 suites 6 6 n/a 0 0 00:06:45.280 tests 138 138 138 0 0 00:06:45.280 asserts 893 893 893 0 n/a 00:06:45.280 00:06:45.280 Elapsed time = 1.460 seconds 00:06:45.280 0 00:06:45.280 13:47:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62171 00:06:45.280 13:47:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62171 ']' 00:06:45.280 13:47:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62171 00:06:45.280 13:47:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:45.280 13:47:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.280 13:47:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62171 00:06:45.280 13:47:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.280 13:47:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.280 killing process with pid 62171 00:06:45.280 13:47:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62171' 00:06:45.280 13:47:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62171 00:06:45.280 13:47:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62171 00:06:46.216 13:47:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:46.216 00:06:46.216 real 0m2.820s 00:06:46.216 user 0m7.237s 00:06:46.216 sys 0m0.406s 00:06:46.216 13:47:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.216 ************************************ 00:06:46.216 END TEST bdev_bounds 00:06:46.216 ************************************ 00:06:46.216 13:47:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # 
set +x 00:06:46.476 13:47:39 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:46.476 13:47:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:46.476 13:47:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.476 13:47:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:46.476 ************************************ 00:06:46.476 START TEST bdev_nbd 00:06:46.476 ************************************ 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62230 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62230 /var/tmp/spdk-nbd.sock 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62230 ']' 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.476 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk-nbd.sock... 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.476 13:47:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:46.476 [2024-12-11 13:47:39.382444] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:46.476 [2024-12-11 13:47:39.382559] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.735 [2024-12-11 13:47:39.565589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.735 [2024-12-11 13:47:39.670629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w 
nbd0 /proc/partitions 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.670 1+0 records in 00:06:47.670 1+0 records out 00:06:47.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615789 s, 6.7 MB/s 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:47.670 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.931 1+0 records in 00:06:47.931 1+0 records out 00:06:47.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603389 s, 6.8 MB/s 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:47.931 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:47.931 
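(Each waitfornbd pass above is the same two-step health check, repeated per device: wait for the kernel to register the node in /proc/partitions, then prove it is usable with a single 4 KiB direct read, which is where the "1+0 records in/out" lines come from. Condensed into a sketch of the pattern; the scratch file path is hypothetical:)

  # Wait for /dev/<name> to appear, then read one 4 KiB block through it
  waitfornbd_sketch() {
      local name=$1 rc=1   # e.g. nbd0
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions && break
          sleep 0.1
      done
      # a direct-I/O read of one block proves the device answers real I/O
      if dd "if=/dev/$name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct &&
          [[ $(stat -c %s /tmp/nbdtest) -eq 4096 ]]; then
          rc=0
      fi
      rm -f /tmp/nbdtest
      return $rc
  }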
13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:47.932 13:47:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.190 1+0 records in 00:06:48.190 1+0 records out 00:06:48.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000718883 s, 5.7 MB/s 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:48.190 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.449 1+0 records in 00:06:48.449 1+0 records out 00:06:48.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000839344 s, 4.9 MB/s 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:48.449 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.708 1+0 records in 00:06:48.708 1+0 records out 00:06:48.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000803446 s, 5.1 MB/s 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:48.708 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:48.967 
13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.967 1+0 records in 00:06:48.967 1+0 records out 00:06:48.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000746505 s, 5.5 MB/s 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:48.967 13:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.226 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:49.226 { 00:06:49.226 "nbd_device": "/dev/nbd0", 00:06:49.226 "bdev_name": "Nvme0n1" 00:06:49.226 }, 00:06:49.226 { 00:06:49.226 "nbd_device": "/dev/nbd1", 00:06:49.226 "bdev_name": "Nvme1n1" 00:06:49.226 }, 00:06:49.226 { 00:06:49.226 "nbd_device": "/dev/nbd2", 00:06:49.226 "bdev_name": "Nvme2n1" 00:06:49.226 }, 00:06:49.226 { 00:06:49.226 "nbd_device": "/dev/nbd3", 00:06:49.226 "bdev_name": "Nvme2n2" 00:06:49.226 }, 00:06:49.226 { 00:06:49.226 "nbd_device": "/dev/nbd4", 00:06:49.226 "bdev_name": "Nvme2n3" 00:06:49.226 }, 00:06:49.226 { 00:06:49.226 "nbd_device": "/dev/nbd5", 00:06:49.226 "bdev_name": "Nvme3n1" 00:06:49.226 } 00:06:49.226 ]' 00:06:49.226 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:49.226 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:49.226 { 00:06:49.226 "nbd_device": "/dev/nbd0", 00:06:49.226 "bdev_name": "Nvme0n1" 00:06:49.226 }, 00:06:49.226 { 00:06:49.226 "nbd_device": "/dev/nbd1", 00:06:49.226 "bdev_name": "Nvme1n1" 00:06:49.226 }, 00:06:49.226 { 00:06:49.226 "nbd_device": "/dev/nbd2", 00:06:49.226 "bdev_name": "Nvme2n1" 
00:06:49.226 }, 00:06:49.226 { 00:06:49.226 "nbd_device": "/dev/nbd3", 00:06:49.226 "bdev_name": "Nvme2n2" 00:06:49.226 }, 00:06:49.226 { 00:06:49.226 "nbd_device": "/dev/nbd4", 00:06:49.226 "bdev_name": "Nvme2n3" 00:06:49.226 }, 00:06:49.226 { 00:06:49.226 "nbd_device": "/dev/nbd5", 00:06:49.226 "bdev_name": "Nvme3n1" 00:06:49.226 } 00:06:49.226 ]' 00:06:49.226 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:49.226 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:49.226 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.226 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:49.226 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.226 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:49.226 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.226 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:49.486 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.486 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.486 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.486 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.486 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.486 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.486 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.486 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.486 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.486 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:49.745 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:49.745 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:49.745 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:49.745 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.745 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.745 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:49.745 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.745 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.745 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.745 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:50.004 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:50.004 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 
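(The nbd_get_disks JSON above is how the harness learns which /dev/nbd* node belongs to which bdev, and the jq -r '.[] | .nbd_device' filter flattens it into a plain bash array that the nbd_stop_disk loop running here walks. The same lookup done by hand, as a sketch against the socket used in this run:)

  # List active NBD mappings and collect just the device nodes
  disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  mapfile -t nbd_devices < <(jq -r '.[] | .nbd_device' <<< "$disks_json")
  printf '%s\n' "${nbd_devices[@]}"   # /dev/nbd0 ... /dev/nbd5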
00:06:50.004 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:50.004 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.004 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.004 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:50.004 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.004 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.004 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.004 13:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:50.004 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:50.004 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:50.004 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:50.004 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.004 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.004 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:50.004 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.004 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.004 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.004 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:50.264 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:50.264 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:50.264 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:50.264 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.264 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.264 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:50.264 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.264 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.264 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.264 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:50.522 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:50.522 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:50.522 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:50.522 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.522 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.522 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:50.522 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.522 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 
0 00:06:50.522 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.522 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.522 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:50.782 13:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 
00:06:51.041 /dev/nbd0 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.041 1+0 records in 00:06:51.041 1+0 records out 00:06:51.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601431 s, 6.8 MB/s 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:51.041 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:51.300 /dev/nbd1 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.300 1+0 records in 00:06:51.300 1+0 records out 00:06:51.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000702112 s, 5.8 MB/s 00:06:51.300 13:47:44 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:51.300 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:51.559 /dev/nbd10 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.559 1+0 records in 00:06:51.559 1+0 records out 00:06:51.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710355 s, 5.8 MB/s 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:51.559 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:51.819 /dev/nbd11 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:51.819 13:47:44 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.819 1+0 records in 00:06:51.819 1+0 records out 00:06:51.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000738258 s, 5.5 MB/s 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:51.819 13:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:52.078 /dev/nbd12 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.078 1+0 records in 00:06:52.078 1+0 records out 00:06:52.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576034 s, 7.1 MB/s 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:52.078 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:52.336 /dev/nbd13 00:06:52.336 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.337 1+0 records in 00:06:52.337 1+0 records out 00:06:52.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00094121 s, 4.4 MB/s 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.337 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.596 { 00:06:52.596 "nbd_device": "/dev/nbd0", 00:06:52.596 "bdev_name": "Nvme0n1" 00:06:52.596 }, 00:06:52.596 { 00:06:52.596 "nbd_device": "/dev/nbd1", 00:06:52.596 "bdev_name": "Nvme1n1" 00:06:52.596 }, 00:06:52.596 { 00:06:52.596 "nbd_device": "/dev/nbd10", 00:06:52.596 "bdev_name": "Nvme2n1" 00:06:52.596 }, 00:06:52.596 { 00:06:52.596 "nbd_device": "/dev/nbd11", 00:06:52.596 "bdev_name": "Nvme2n2" 00:06:52.596 }, 00:06:52.596 { 00:06:52.596 "nbd_device": "/dev/nbd12", 00:06:52.596 "bdev_name": "Nvme2n3" 00:06:52.596 }, 00:06:52.596 { 00:06:52.596 
"nbd_device": "/dev/nbd13", 00:06:52.596 "bdev_name": "Nvme3n1" 00:06:52.596 } 00:06:52.596 ]' 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.596 { 00:06:52.596 "nbd_device": "/dev/nbd0", 00:06:52.596 "bdev_name": "Nvme0n1" 00:06:52.596 }, 00:06:52.596 { 00:06:52.596 "nbd_device": "/dev/nbd1", 00:06:52.596 "bdev_name": "Nvme1n1" 00:06:52.596 }, 00:06:52.596 { 00:06:52.596 "nbd_device": "/dev/nbd10", 00:06:52.596 "bdev_name": "Nvme2n1" 00:06:52.596 }, 00:06:52.596 { 00:06:52.596 "nbd_device": "/dev/nbd11", 00:06:52.596 "bdev_name": "Nvme2n2" 00:06:52.596 }, 00:06:52.596 { 00:06:52.596 "nbd_device": "/dev/nbd12", 00:06:52.596 "bdev_name": "Nvme2n3" 00:06:52.596 }, 00:06:52.596 { 00:06:52.596 "nbd_device": "/dev/nbd13", 00:06:52.596 "bdev_name": "Nvme3n1" 00:06:52.596 } 00:06:52.596 ]' 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.596 /dev/nbd1 00:06:52.596 /dev/nbd10 00:06:52.596 /dev/nbd11 00:06:52.596 /dev/nbd12 00:06:52.596 /dev/nbd13' 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.596 /dev/nbd1 00:06:52.596 /dev/nbd10 00:06:52.596 /dev/nbd11 00:06:52.596 /dev/nbd12 00:06:52.596 /dev/nbd13' 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:52.596 256+0 records in 00:06:52.596 256+0 records out 00:06:52.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124648 s, 84.1 MB/s 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.596 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.856 256+0 records in 00:06:52.856 256+0 records out 00:06:52.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13625 s, 7.7 MB/s 00:06:52.856 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.856 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 
00:06:52.856 256+0 records in 00:06:52.856 256+0 records out 00:06:52.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140725 s, 7.5 MB/s 00:06:52.856 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.856 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:53.114 256+0 records in 00:06:53.114 256+0 records out 00:06:53.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140251 s, 7.5 MB/s 00:06:53.114 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.114 13:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:53.114 256+0 records in 00:06:53.114 256+0 records out 00:06:53.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13964 s, 7.5 MB/s 00:06:53.114 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.114 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:53.373 256+0 records in 00:06:53.373 256+0 records out 00:06:53.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1345 s, 7.8 MB/s 00:06:53.373 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.373 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:53.632 256+0 records in 00:06:53.632 256+0 records out 00:06:53.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144002 s, 7.3 MB/s 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.632 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.891 13:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:54.150 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:54.150 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:54.150 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:54.150 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.150 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.150 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:54.150 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.150 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.150 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.150 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:54.409 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:54.409 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:54.409 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:54.409 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.409 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.409 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:54.409 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.409 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.409 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.409 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:54.667 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:54.668 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:54.668 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:54.668 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.668 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.668 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:54.668 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.668 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.668 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.668 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:54.927 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:54.927 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:54.927 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:54.927 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.927 13:47:47 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.927 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:54.927 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.927 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.927 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.927 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.927 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.927 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.186 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.186 13:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.186 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.186 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.186 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.186 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:55.186 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.186 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.186 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.186 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.186 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.186 13:47:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:55.186 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.186 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:55.186 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:55.444 malloc_lvol_verify 00:06:55.444 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:55.444 de83d535-cfe4-43c1-ad4a-162d0e2b163f 00:06:55.444 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:55.703 e70569f1-df87-44f3-86ac-93f00e2a03aa 00:06:55.703 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:55.963 /dev/nbd0 00:06:55.963 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:55.963 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:55.963 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:55.963 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:55.963 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:55.963 mke2fs 1.47.0 (5-Feb-2023) 00:06:55.963 
Discarding device blocks: 0/4096 done 00:06:55.963 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:55.963 00:06:55.963 Allocating group tables: 0/1 done 00:06:55.963 Writing inode tables: 0/1 done 00:06:55.963 Creating journal (1024 blocks): done 00:06:55.963 Writing superblocks and filesystem accounting information: 0/1 done 00:06:55.963 00:06:55.963 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:55.963 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.963 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:55.963 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:55.963 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:55.963 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.963 13:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62230 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62230 ']' 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62230 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62230 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.222 killing process with pid 62230 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62230' 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62230 00:06:56.222 13:47:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62230 00:06:57.598 13:47:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:57.598 00:06:57.598 real 0m11.075s 00:06:57.598 user 0m14.397s 00:06:57.598 sys 0m4.412s 00:06:57.598 13:47:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.598 13:47:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:57.598 ************************************ 00:06:57.598 END TEST bdev_nbd 00:06:57.598 ************************************ 
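The bdev_nbd stage that ends here leans on two helpers visible throughout the trace: waitfornbd, which polls /proc/partitions until the kernel exposes the new NBD node and then proves it readable with a single 4 KiB O_DIRECT read, and the nbd_get_disks RPC, whose JSON output is flattened for comparison via rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'. A minimal sketch of the waitfornbd pattern, reconstructed from the xtrace output above (the 20-try bound and the dd/stat/rm sequence come from the trace; the retry interval and temp-file path are assumptions, not the harness's actual source):

    waitfornbd() {
        local nbd_name=$1
        local i
        # Poll until the kernel registers the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed interval; the trace does not show the delay
        done
        # Prove the device is readable: one 4 KiB O_DIRECT read must yield data.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
            local size
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
            sleep 0.1
        done
        return 1
    }

waitfornbd_exit, also traced above, is roughly the inverse: the same bounded /proc/partitions poll, breaking once the entry disappears after nbd_stop_disk.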
00:06:57.598 13:47:50 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:57.598 13:47:50 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:57.598 skipping fio tests on NVMe due to multi-ns failures. 00:06:57.598 13:47:50 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:57.598 13:47:50 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:57.598 13:47:50 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:57.598 13:47:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:57.598 13:47:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.598 13:47:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.598 ************************************ 00:06:57.598 START TEST bdev_verify 00:06:57.598 ************************************ 00:06:57.598 13:47:50 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:57.598 [2024-12-11 13:47:50.526971] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:57.598 [2024-12-11 13:47:50.527338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62620 ] 00:06:57.856 [2024-12-11 13:47:50.710519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.856 [2024-12-11 13:47:50.826055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.856 [2024-12-11 13:47:50.826082] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.794 Running I/O for 5 seconds... 
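This verify stage runs bdevperf with queue depth 128 (-q 128), 4 KiB I/O (-o 4096), a 5-second verify workload (-w verify -t 5), and two cores (-m 0x3), attaching the six NVMe bdevs from a bdev.json generated earlier by the harness. For readers who want to reproduce a run outside the harness, a minimal hand-written config of the same shape would look roughly like this (hypothetical: the PCI address is a placeholder and must match a locally bound NVMe device):

    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The periodic IOPS samples and the per-bdev latency table follow.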
00:07:00.666 20800.00 IOPS, 81.25 MiB/s [2024-12-11T13:47:55.090Z] 20704.00 IOPS, 80.88 MiB/s [2024-12-11T13:47:56.025Z] 20373.33 IOPS, 79.58 MiB/s [2024-12-11T13:47:56.962Z] 20736.00 IOPS, 81.00 MiB/s [2024-12-11T13:47:56.962Z] 20825.60 IOPS, 81.35 MiB/s 00:07:03.915 Latency(us) 00:07:03.915 [2024-12-11T13:47:56.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:03.915 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.915 Verification LBA range: start 0x0 length 0xbd0bd 00:07:03.915 Nvme0n1 : 5.05 1698.86 6.64 0.00 0.00 75143.35 16107.64 74537.33 00:07:03.915 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.915 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:03.915 Nvme0n1 : 5.04 1726.27 6.74 0.00 0.00 73974.01 16423.48 64851.69 00:07:03.915 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.915 Verification LBA range: start 0x0 length 0xa0000 00:07:03.915 Nvme1n1 : 5.05 1698.42 6.63 0.00 0.00 74924.26 18318.50 69905.07 00:07:03.915 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.915 Verification LBA range: start 0xa0000 length 0xa0000 00:07:03.915 Nvme1n1 : 5.04 1725.82 6.74 0.00 0.00 73889.98 17476.27 63588.34 00:07:03.915 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.915 Verification LBA range: start 0x0 length 0x80000 00:07:03.915 Nvme2n1 : 5.05 1698.00 6.63 0.00 0.00 74769.28 17370.99 73273.99 00:07:03.915 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.915 Verification LBA range: start 0x80000 length 0x80000 00:07:03.915 Nvme2n1 : 5.04 1725.39 6.74 0.00 0.00 73754.98 16212.92 64009.46 00:07:03.915 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.915 Verification LBA range: start 0x0 length 0x80000 00:07:03.915 Nvme2n2 : 5.06 1706.50 6.67 0.00 0.00 74278.32 4632.26 78748.48 00:07:03.915 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.915 Verification LBA range: start 0x80000 length 0x80000 00:07:03.915 Nvme2n2 : 5.05 1724.94 6.74 0.00 0.00 73684.19 15581.25 63588.34 00:07:03.915 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.915 Verification LBA range: start 0x0 length 0x80000 00:07:03.915 Nvme2n3 : 5.07 1715.13 6.70 0.00 0.00 73857.20 9580.36 79590.71 00:07:03.915 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.915 Verification LBA range: start 0x80000 length 0x80000 00:07:03.915 Nvme2n3 : 5.05 1724.51 6.74 0.00 0.00 73588.55 15054.86 62746.11 00:07:03.915 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.915 Verification LBA range: start 0x0 length 0x20000 00:07:03.915 Nvme3n1 : 5.08 1714.77 6.70 0.00 0.00 73780.46 8211.74 77064.02 00:07:03.915 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.915 Verification LBA range: start 0x20000 length 0x20000 00:07:03.915 Nvme3n1 : 5.06 1743.88 6.81 0.00 0.00 72718.17 4290.11 64430.57 00:07:03.915 [2024-12-11T13:47:56.962Z] =================================================================================================================== 00:07:03.915 [2024-12-11T13:47:56.962Z] Total : 20602.51 80.48 0.00 0.00 74025.09 4290.11 79590.71 00:07:05.294 00:07:05.294 real 0m7.599s 00:07:05.294 user 0m14.037s 00:07:05.294 sys 0m0.322s 00:07:05.294 13:47:58 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.294 13:47:58 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:05.294 ************************************ 00:07:05.294 END TEST bdev_verify 00:07:05.294 ************************************ 00:07:05.294 13:47:58 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:05.294 13:47:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:05.294 13:47:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.294 13:47:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.294 ************************************ 00:07:05.294 START TEST bdev_verify_big_io 00:07:05.294 ************************************ 00:07:05.294 13:47:58 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:05.294 [2024-12-11 13:47:58.192866] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:05.294 [2024-12-11 13:47:58.192972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62724 ] 00:07:05.553 [2024-12-11 13:47:58.373699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.553 [2024-12-11 13:47:58.486009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.553 [2024-12-11 13:47:58.486035] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.488 Running I/O for 5 seconds... 
00:07:09.770 1170.00 IOPS, 73.12 MiB/s [2024-12-11T13:48:03.075Z] 2049.50 IOPS, 128.09 MiB/s [2024-12-11T13:48:04.980Z] 2008.00 IOPS, 125.50 MiB/s [2024-12-11T13:48:05.239Z] 2205.75 IOPS, 137.86 MiB/s 00:07:12.192 Latency(us) 00:07:12.192 [2024-12-11T13:48:05.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.192 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:12.192 Verification LBA range: start 0x0 length 0xbd0b 00:07:12.192 Nvme0n1 : 5.48 163.98 10.25 0.00 0.00 737252.20 28425.25 754637.83 00:07:12.192 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:12.192 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:12.192 Nvme0n1 : 5.48 168.74 10.55 0.00 0.00 729930.68 24529.94 758006.75 00:07:12.192 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:12.192 Verification LBA range: start 0x0 length 0xa000 00:07:12.192 Nvme1n1 : 5.53 173.48 10.84 0.00 0.00 703592.68 46954.31 656939.18 00:07:12.192 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:12.192 Verification LBA range: start 0xa000 length 0xa000 00:07:12.192 Nvme1n1 : 5.54 173.33 10.83 0.00 0.00 701616.40 52639.36 656939.18 00:07:12.192 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:12.192 Verification LBA range: start 0x0 length 0x8000 00:07:12.192 Nvme2n1 : 5.60 179.20 11.20 0.00 0.00 668394.98 20739.91 670414.86 00:07:12.192 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:12.192 Verification LBA range: start 0x8000 length 0x8000 00:07:12.192 Nvme2n1 : 5.60 178.63 11.16 0.00 0.00 668848.62 19160.73 673783.78 00:07:12.192 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:12.192 Verification LBA range: start 0x0 length 0x8000 00:07:12.192 Nvme2n2 : 5.60 178.98 11.19 0.00 0.00 653378.62 20634.63 687259.45 00:07:12.192 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:12.192 Verification LBA range: start 0x8000 length 0x8000 00:07:12.192 Nvme2n2 : 5.60 182.82 11.43 0.00 0.00 641471.02 38532.01 687259.45 00:07:12.192 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:12.192 Verification LBA range: start 0x0 length 0x8000 00:07:12.192 Nvme2n3 : 5.60 182.81 11.43 0.00 0.00 627412.92 41269.26 700735.13 00:07:12.192 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:12.192 Verification LBA range: start 0x8000 length 0x8000 00:07:12.192 Nvme2n3 : 5.60 182.72 11.42 0.00 0.00 626287.55 38532.01 700735.13 00:07:12.192 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:12.192 Verification LBA range: start 0x0 length 0x2000 00:07:12.192 Nvme3n1 : 5.68 202.80 12.67 0.00 0.00 555158.22 1223.87 714210.80 00:07:12.192 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:12.192 Verification LBA range: start 0x2000 length 0x2000 00:07:12.192 Nvme3n1 : 5.68 202.77 12.67 0.00 0.00 553932.89 1144.91 717579.72 00:07:12.192 [2024-12-11T13:48:05.239Z] =================================================================================================================== 00:07:12.192 [2024-12-11T13:48:05.239Z] Total : 2170.25 135.64 0.00 0.00 651483.63 1144.91 758006.75 00:07:14.097 ************************************ 00:07:14.097 END TEST bdev_verify_big_io 00:07:14.097 ************************************ 00:07:14.097 00:07:14.097 real 0m8.784s 00:07:14.097 
user 0m16.354s 00:07:14.097 sys 0m0.358s 00:07:14.097 13:48:06 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.097 13:48:06 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:14.097 13:48:06 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:14.097 13:48:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:14.097 13:48:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.097 13:48:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:14.097 ************************************ 00:07:14.097 START TEST bdev_write_zeroes 00:07:14.097 ************************************ 00:07:14.097 13:48:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:14.097 [2024-12-11 13:48:07.051996] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:14.097 [2024-12-11 13:48:07.052383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62833 ] 00:07:14.356 [2024-12-11 13:48:07.232692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.356 [2024-12-11 13:48:07.348149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.294 Running I/O for 1 seconds... 
00:07:16.228 81792.00 IOPS, 319.50 MiB/s 00:07:16.228 Latency(us) 00:07:16.228 [2024-12-11T13:48:09.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.228 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.228 Nvme0n1 : 1.02 13561.20 52.97 0.00 0.00 9408.93 4553.30 26530.24 00:07:16.228 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.228 Nvme1n1 : 1.02 13521.99 52.82 0.00 0.00 9417.38 8317.02 26635.51 00:07:16.228 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.228 Nvme2n1 : 1.03 13483.28 52.67 0.00 0.00 9411.35 8001.18 25898.56 00:07:16.228 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.228 Nvme2n2 : 1.03 13444.88 52.52 0.00 0.00 9394.35 7948.54 23898.27 00:07:16.228 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.228 Nvme2n3 : 1.03 13416.06 52.41 0.00 0.00 9392.71 8053.82 23582.43 00:07:16.228 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.228 Nvme3n1 : 1.03 13391.57 52.31 0.00 0.00 9377.28 8001.18 22634.92 00:07:16.228 [2024-12-11T13:48:09.275Z] =================================================================================================================== 00:07:16.228 [2024-12-11T13:48:09.275Z] Total : 80818.97 315.70 0.00 0.00 9400.34 4553.30 26635.51 00:07:17.601 00:07:17.601 real 0m3.289s 00:07:17.601 user 0m2.909s 00:07:17.601 sys 0m0.260s 00:07:17.602 13:48:10 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.602 13:48:10 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:17.602 ************************************ 00:07:17.602 END TEST bdev_write_zeroes 00:07:17.602 ************************************ 00:07:17.602 13:48:10 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:17.602 13:48:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:17.602 13:48:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.602 13:48:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:17.602 ************************************ 00:07:17.602 START TEST bdev_json_nonenclosed 00:07:17.602 ************************************ 00:07:17.602 13:48:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:17.602 [2024-12-11 13:48:10.411856] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:17.602 [2024-12-11 13:48:10.412200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62891 ] 00:07:17.602 [2024-12-11 13:48:10.592330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.859 [2024-12-11 13:48:10.704663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.859 [2024-12-11 13:48:10.704762] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:17.859 [2024-12-11 13:48:10.704784] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:17.859 [2024-12-11 13:48:10.704796] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.116 00:07:18.116 real 0m0.635s 00:07:18.116 user 0m0.381s 00:07:18.116 sys 0m0.150s 00:07:18.116 13:48:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.116 ************************************ 00:07:18.116 END TEST bdev_json_nonenclosed 00:07:18.116 ************************************ 00:07:18.116 13:48:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:18.116 13:48:11 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:18.116 13:48:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:18.116 13:48:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.116 13:48:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:18.116 ************************************ 00:07:18.116 START TEST bdev_json_nonarray 00:07:18.116 ************************************ 00:07:18.116 13:48:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:18.117 [2024-12-11 13:48:11.128439] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:18.117 [2024-12-11 13:48:11.128914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62912 ] 00:07:18.374 [2024-12-11 13:48:11.311200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.632 [2024-12-11 13:48:11.420192] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.632 [2024-12-11 13:48:11.420306] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
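The two JSON negative tests around this point feed bdevperf deliberately malformed configs. The fixture files themselves are not reproduced in this log, but the two error messages pin down their shape; a plausible reconstruction (hypothetical contents, matching the errors rather than the actual nonenclosed.json/nonarray.json):

    valid       -> { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    nonenclosed -> "subsystems": [ ... ]          (config not enclosed in {})
    nonarray    -> { "subsystems": { ... } }      ("subsystems" present but not an array)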
00:07:18.632 [2024-12-11 13:48:11.420329] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:18.632 [2024-12-11 13:48:11.420341] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.632 00:07:18.632 real 0m0.638s 00:07:18.632 user 0m0.386s 00:07:18.632 sys 0m0.143s 00:07:18.632 13:48:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.632 13:48:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:18.632 ************************************ 00:07:18.632 END TEST bdev_json_nonarray 00:07:18.632 ************************************ 00:07:18.890 13:48:11 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:07:18.890 13:48:11 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:07:18.890 13:48:11 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:07:18.890 13:48:11 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:18.890 13:48:11 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:07:18.890 13:48:11 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:18.890 13:48:11 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:18.890 13:48:11 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:18.890 13:48:11 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:18.890 13:48:11 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:18.890 13:48:11 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:18.890 00:07:18.890 real 0m42.162s 00:07:18.890 user 1m2.170s 00:07:18.890 sys 0m7.520s 00:07:18.890 13:48:11 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.890 ************************************ 00:07:18.890 END TEST blockdev_nvme 00:07:18.890 13:48:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:18.890 ************************************ 00:07:18.890 13:48:11 -- spdk/autotest.sh@209 -- # uname -s 00:07:18.890 13:48:11 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:18.890 13:48:11 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:18.890 13:48:11 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:18.890 13:48:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.890 13:48:11 -- common/autotest_common.sh@10 -- # set +x 00:07:18.890 ************************************ 00:07:18.890 START TEST blockdev_nvme_gpt 00:07:18.890 ************************************ 00:07:18.890 13:48:11 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:19.149 * Looking for test storage... 
00:07:19.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:19.149 13:48:11 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:19.149 13:48:11 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:07:19.149 13:48:11 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:19.149 13:48:12 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.149 13:48:12 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:19.149 13:48:12 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.149 13:48:12 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:19.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.149 --rc genhtml_branch_coverage=1 00:07:19.149 --rc genhtml_function_coverage=1 00:07:19.149 --rc genhtml_legend=1 00:07:19.149 --rc geninfo_all_blocks=1 00:07:19.149 --rc geninfo_unexecuted_blocks=1 00:07:19.149 00:07:19.149 ' 00:07:19.149 13:48:12 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:19.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.149 --rc 
genhtml_branch_coverage=1 00:07:19.149 --rc genhtml_function_coverage=1 00:07:19.149 --rc genhtml_legend=1 00:07:19.149 --rc geninfo_all_blocks=1 00:07:19.149 --rc geninfo_unexecuted_blocks=1 00:07:19.149 00:07:19.149 ' 00:07:19.149 13:48:12 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:19.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.149 --rc genhtml_branch_coverage=1 00:07:19.149 --rc genhtml_function_coverage=1 00:07:19.149 --rc genhtml_legend=1 00:07:19.149 --rc geninfo_all_blocks=1 00:07:19.149 --rc geninfo_unexecuted_blocks=1 00:07:19.149 00:07:19.149 ' 00:07:19.149 13:48:12 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:19.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.149 --rc genhtml_branch_coverage=1 00:07:19.149 --rc genhtml_function_coverage=1 00:07:19.149 --rc genhtml_legend=1 00:07:19.149 --rc geninfo_all_blocks=1 00:07:19.149 --rc geninfo_unexecuted_blocks=1 00:07:19.149 00:07:19.149 ' 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62996 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62996 
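The lcov version gate traced earlier (lt 1.15 2, i.e. cmp_versions 1.15 '<' 2, returning 0 so the branch/function coverage options get exported) splits each version string on any of ., -, or : and compares the pieces numerically, zero-padding the shorter list. A standalone sketch of that logic (a reconstruction, not the verbatim scripts/common.sh):

    cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]   # all components equal: true only for ==, <=, >=
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo '1.15 < 2'   # matches the trace: 1 < 2 on the first component, return 0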
00:07:19.149 13:48:12 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62996 ']' 00:07:19.149 13:48:12 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:19.149 13:48:12 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.149 13:48:12 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.149 13:48:12 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.149 13:48:12 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.149 13:48:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:19.149 [2024-12-11 13:48:12.187024] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:19.149 [2024-12-11 13:48:12.187278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62996 ] 00:07:19.408 [2024-12-11 13:48:12.365310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.666 [2024-12-11 13:48:12.481304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.603 13:48:13 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.603 13:48:13 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:20.603 13:48:13 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:20.603 13:48:13 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:20.603 13:48:13 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:20.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:21.135 Waiting for block devices as requested 00:07:21.394 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:21.394 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:21.394 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:21.653 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:26.930 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:26.930 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:07:26.930 13:48:19 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:26.930 13:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:26.930 13:48:19 
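The scan above is the whole of get_zoned_devs: walk every controller under /sys/class/nvme and treat each namespace as zoned unless its queue/zoned sysfs attribute reads none (every check in this run took the [[ none != none ]] branch, so the map stays empty). A minimal sketch of the same check (a reconstruction of the traced helper):

    declare -A zoned_devs
    for nvme in /sys/class/nvme/nvme*; do
        for ns in "$nvme/"nvme*n*; do
            dev=${ns##*/}
            # zoned unless the queue reports "none"
            if [[ -e /sys/block/$dev/queue/zoned && $(< "/sys/block/$dev/queue/zoned") != none ]]; then
                zoned_devs[$dev]=1
            fi
        done
    done
    echo "zoned devices: ${!zoned_devs[*]}"   # prints nothing in this run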
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:26.930 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:26.930 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:26.930 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:26.930 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:26.930 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:26.930 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:26.930 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:26.930 BYT; 00:07:26.930 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:26.930 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:26.930 BYT; 00:07:26.930 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:26.930 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:26.930 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:26.930 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:26.930 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:26.930 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:26.930 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:26.930 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:26.930 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:26.930 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:26.930 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:26.930 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:26.930 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:26.930 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:26.930 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:26.931 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:26.931 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:26.931 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:26.931 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:26.931 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:26.931 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:26.931 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:26.931 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:26.931 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:26.931 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:26.931 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:26.931 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:26.931 13:48:19 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:26.931 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:26.931 13:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:27.868 The operation has completed successfully. 00:07:27.868 13:48:20 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:28.805 The operation has completed successfully. 00:07:28.805 13:48:21 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:29.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:30.370 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.370 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.370 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.370 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.665 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:30.665 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.665 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.665 [] 00:07:30.665 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.665 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:30.665 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:30.665 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:30.665 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:30.665 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:30.665 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.665 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.924 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.924 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:30.924 13:48:23 
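Taken together, the blockdev.sh@104-132 steps above amount to a compact GPT provisioning recipe: probe for a disk with no label, lay down a fresh GPT with two equal partitions, then retag both partitions with SPDK's type GUIDs (parsed out of module/bdev/gpt/gpt.h as shown). Condensed into the bare commands, with device and GUIDs exactly as in this run:

    dev=/dev/nvme0n1
    parted "$dev" -ms print    # 'unrecognised disk label' here marks the disk as safe to claim
    parted -s "$dev" mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    # partition 1 gets SPDK_GPT_PART_TYPE_GUID, partition 2 the old GUID, plus fixed unique GUIDs
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"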
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.924 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.924 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.924 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:30.924 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:30.924 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.924 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.924 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.924 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:30.924 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.924 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.924 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.924 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:30.924 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.924 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.924 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.924 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:30.924 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:30.924 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.924 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.924 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:31.183 13:48:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.183 13:48:24 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:31.183 13:48:24 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:31.184 13:48:24 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8fd6534c-1629-43fe-a7b6-a086303b1a73"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8fd6534c-1629-43fe-a7b6-a086303b1a73",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "52b07a0c-a4b8-433c-ac5b-0b4a0e6c3562"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "52b07a0c-a4b8-433c-ac5b-0b4a0e6c3562",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5978800b-2005-4224-9242-2e00bb84913c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5978800b-2005-4224-9242-2e00bb84913c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d3f02051-0dea-4aec-9bea-93c9e04ac9b9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d3f02051-0dea-4aec-9bea-93c9e04ac9b9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "da568dd0-da4f-4a96-86c7-9d0b12585d8d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "da568dd0-da4f-4a96-86c7-9d0b12585d8d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:31.184 13:48:24 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:31.184 13:48:24 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:31.184 13:48:24 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:31.184 13:48:24 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62996 00:07:31.184 13:48:24 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62996 ']' 00:07:31.184 13:48:24 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62996 00:07:31.184 13:48:24 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:31.184 13:48:24 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.184 13:48:24 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62996 00:07:31.184 killing process with pid 62996 00:07:31.184 13:48:24 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.184 13:48:24 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.184 13:48:24 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62996' 00:07:31.184 13:48:24 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62996 00:07:31.184 13:48:24 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62996 00:07:33.716 13:48:26 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:33.716 13:48:26 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:33.716 13:48:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:33.716 13:48:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.716 13:48:26 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.716 ************************************ 00:07:33.716 START TEST bdev_hello_world 00:07:33.716 ************************************ 00:07:33.716 13:48:26 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:33.716 [2024-12-11 13:48:26.626937] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:33.716 [2024-12-11 13:48:26.627452] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63645 ] 00:07:33.975 [2024-12-11 13:48:26.808353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.975 [2024-12-11 13:48:26.920535] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.545 [2024-12-11 13:48:27.576667] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:34.545 [2024-12-11 13:48:27.576724] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:34.545 [2024-12-11 13:48:27.576765] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:34.545 [2024-12-11 13:48:27.579710] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:34.545 [2024-12-11 13:48:27.580330] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:34.545 [2024-12-11 13:48:27.580368] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:34.545 [2024-12-11 13:48:27.580661] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
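The NOTICE sequence above is the complete happy path of the hello_bdev example: start the app, open the bdev named by -b, grab an I/O channel, write the buffer, read it back, and match it. Reproducing it standalone against the same config is a one-liner (paths as used throughout this job):

    # -b picks which bdev from the JSON config to open; on success the output
    # ends with 'Read string from bdev : Hello World!' and then 'Stopping app'
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1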
00:07:34.545 00:07:34.545 [2024-12-11 13:48:27.580685] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:35.924 00:07:35.924 real 0m2.187s 00:07:35.924 user 0m1.812s 00:07:35.924 sys 0m0.267s 00:07:35.924 ************************************ 00:07:35.924 END TEST bdev_hello_world 00:07:35.924 ************************************ 00:07:35.924 13:48:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.924 13:48:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:35.924 13:48:28 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:35.924 13:48:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:35.924 13:48:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.924 13:48:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:35.924 ************************************ 00:07:35.924 START TEST bdev_bounds 00:07:35.924 ************************************ 00:07:35.924 13:48:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:35.924 Process bdevio pid: 63687 00:07:35.924 13:48:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63687 00:07:35.924 13:48:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:35.924 13:48:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:35.924 13:48:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63687' 00:07:35.924 13:48:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63687 00:07:35.924 13:48:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63687 ']' 00:07:35.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.924 13:48:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.924 13:48:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.924 13:48:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.924 13:48:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.924 13:48:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:35.924 [2024-12-11 13:48:28.887299] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:35.924 [2024-12-11 13:48:28.887417] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63687 ] 00:07:36.183 [2024-12-11 13:48:29.071544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.183 [2024-12-11 13:48:29.191272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.183 [2024-12-11 13:48:29.191337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.183 [2024-12-11 13:48:29.191370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.120 13:48:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.120 13:48:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:37.120 13:48:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:37.120 I/O targets: 00:07:37.120 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:37.120 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:37.120 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:37.120 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:37.120 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:37.120 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:37.120 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:37.120 00:07:37.120 00:07:37.120 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.120 http://cunit.sourceforge.net/ 00:07:37.120 00:07:37.120 00:07:37.120 Suite: bdevio tests on: Nvme3n1 00:07:37.120 Test: blockdev write read block ...passed 00:07:37.120 Test: blockdev write zeroes read block ...passed 00:07:37.120 Test: blockdev write zeroes read no split ...passed 00:07:37.120 Test: blockdev write zeroes read split ...passed 00:07:37.120 Test: blockdev write zeroes read split partial ...passed 00:07:37.120 Test: blockdev reset ...[2024-12-11 13:48:30.065781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:37.120 [2024-12-11 13:48:30.069989] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:07:37.120 passed 00:07:37.120 Test: blockdev write read 8 blocks ...passed 00:07:37.120 Test: blockdev write read size > 128k ...passed 00:07:37.120 Test: blockdev write read invalid size ...passed 00:07:37.120 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:37.120 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:37.120 Test: blockdev write read max offset ...passed 00:07:37.120 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:37.120 Test: blockdev writev readv 8 blocks ...passed 00:07:37.120 Test: blockdev writev readv 30 x 1block ...passed 00:07:37.120 Test: blockdev writev readv block ...passed 00:07:37.120 Test: blockdev writev readv size > 128k ...passed 00:07:37.120 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:37.120 Test: blockdev comparev and writev ...[2024-12-11 13:48:30.079461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3604000 len:0x1000 00:07:37.120 [2024-12-11 13:48:30.079509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:37.120 passed 00:07:37.120 Test: blockdev nvme passthru rw ...passed 00:07:37.120 Test: blockdev nvme passthru vendor specific ...passed 00:07:37.120 Test: blockdev nvme admin passthru ...[2024-12-11 13:48:30.080439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:37.120 [2024-12-11 13:48:30.080475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:37.120 passed 00:07:37.120 Test: blockdev copy ...passed 00:07:37.120 Suite: bdevio tests on: Nvme2n3 00:07:37.120 Test: blockdev write read block ...passed 00:07:37.120 Test: blockdev write zeroes read block ...passed 00:07:37.120 Test: blockdev write zeroes read no split ...passed 00:07:37.120 Test: blockdev write zeroes read split ...passed 00:07:37.120 Test: blockdev write zeroes read split partial ...passed 00:07:37.120 Test: blockdev reset ...[2024-12-11 13:48:30.157204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:37.120 [2024-12-11 13:48:30.161492] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:37.120 passed 00:07:37.120 Test: blockdev write read 8 blocks ...passed 00:07:37.120 Test: blockdev write read size > 128k ...passed 00:07:37.120 Test: blockdev write read invalid size ...passed 00:07:37.120 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:37.120 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:37.120 Test: blockdev write read max offset ...passed 00:07:37.120 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:37.379 Test: blockdev writev readv 8 blocks ...passed 00:07:37.379 Test: blockdev writev readv 30 x 1block ...passed 00:07:37.379 Test: blockdev writev readv block ...passed 00:07:37.379 Test: blockdev writev readv size > 128k ...passed 00:07:37.379 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:37.379 Test: blockdev comparev and writev ...[2024-12-11 13:48:30.170414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3602000 len:0x1000 00:07:37.379 [2024-12-11 13:48:30.170459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:37.379 passed 00:07:37.379 Test: blockdev nvme passthru rw ...passed 00:07:37.379 Test: blockdev nvme passthru vendor specific ...[2024-12-11 13:48:30.171411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:37.379 [2024-12-11 13:48:30.171543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:37.379 passed 00:07:37.379 Test: blockdev nvme admin passthru ...passed 00:07:37.379 Test: blockdev copy ...passed 00:07:37.379 Suite: bdevio tests on: Nvme2n2 00:07:37.379 Test: blockdev write read block ...passed 00:07:37.379 Test: blockdev write zeroes read block ...passed 00:07:37.379 Test: blockdev write zeroes read no split ...passed 00:07:37.379 Test: blockdev write zeroes read split ...passed 00:07:37.379 Test: blockdev write zeroes read split partial ...passed 00:07:37.379 Test: blockdev reset ...[2024-12-11 13:48:30.248566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:37.379 [2024-12-11 13:48:30.252879] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:37.379 passed 00:07:37.379 Test: blockdev write read 8 blocks ...passed 00:07:37.379 Test: blockdev write read size > 128k ...passed 00:07:37.379 Test: blockdev write read invalid size ...passed 00:07:37.379 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:37.379 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:37.379 Test: blockdev write read max offset ...passed 00:07:37.379 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:37.379 Test: blockdev writev readv 8 blocks ...passed 00:07:37.379 Test: blockdev writev readv 30 x 1block ...passed 00:07:37.379 Test: blockdev writev readv block ...passed 00:07:37.379 Test: blockdev writev readv size > 128k ...passed 00:07:37.379 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:37.379 Test: blockdev comparev and writev ...[2024-12-11 13:48:30.263450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7438000 len:0x1000 00:07:37.379 [2024-12-11 13:48:30.263615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:37.379 passed 00:07:37.379 Test: blockdev nvme passthru rw ...passed 00:07:37.379 Test: blockdev nvme passthru vendor specific ...[2024-12-11 13:48:30.264850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:37.379 [2024-12-11 13:48:30.264941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:37.379 passed 00:07:37.379 Test: blockdev nvme admin passthru ...passed 00:07:37.379 Test: blockdev copy ...passed 00:07:37.379 Suite: bdevio tests on: Nvme2n1 00:07:37.379 Test: blockdev write read block ...passed 00:07:37.379 Test: blockdev write zeroes read block ...passed 00:07:37.379 Test: blockdev write zeroes read no split ...passed 00:07:37.379 Test: blockdev write zeroes read split ...passed 00:07:37.379 Test: blockdev write zeroes read split partial ...passed 00:07:37.379 Test: blockdev reset ...[2024-12-11 13:48:30.343208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:37.379 [2024-12-11 13:48:30.347462] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:37.379 passed 00:07:37.379 Test: blockdev write read 8 blocks ...passed 00:07:37.379 Test: blockdev write read size > 128k ...passed 00:07:37.380 Test: blockdev write read invalid size ...passed 00:07:37.380 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:37.380 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:37.380 Test: blockdev write read max offset ...passed 00:07:37.380 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:37.380 Test: blockdev writev readv 8 blocks ...passed 00:07:37.380 Test: blockdev writev readv 30 x 1block ...passed 00:07:37.380 Test: blockdev writev readv block ...passed 00:07:37.380 Test: blockdev writev readv size > 128k ...passed 00:07:37.380 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:37.380 Test: blockdev comparev and writev ...[2024-12-11 13:48:30.356019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7434000 len:0x1000 00:07:37.380 [2024-12-11 13:48:30.356064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:37.380 passed 00:07:37.380 Test: blockdev nvme passthru rw ...passed 00:07:37.380 Test: blockdev nvme passthru vendor specific ...passed 00:07:37.380 Test: blockdev nvme admin passthru ...[2024-12-11 13:48:30.356966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:37.380 [2024-12-11 13:48:30.357005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:37.380 passed 00:07:37.380 Test: blockdev copy ...passed 00:07:37.380 Suite: bdevio tests on: Nvme1n1p2 00:07:37.380 Test: blockdev write read block ...passed 00:07:37.380 Test: blockdev write zeroes read block ...passed 00:07:37.380 Test: blockdev write zeroes read no split ...passed 00:07:37.380 Test: blockdev write zeroes read split ...passed 00:07:37.639 Test: blockdev write zeroes read split partial ...passed 00:07:37.639 Test: blockdev reset ...[2024-12-11 13:48:30.436550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:37.639 [2024-12-11 13:48:30.440340] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:37.639 passed 00:07:37.639 Test: blockdev write read 8 blocks ...passed 00:07:37.639 Test: blockdev write read size > 128k ...passed 00:07:37.639 Test: blockdev write read invalid size ...passed 00:07:37.639 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:37.639 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:37.639 Test: blockdev write read max offset ...passed 00:07:37.639 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:37.639 Test: blockdev writev readv 8 blocks ...passed 00:07:37.639 Test: blockdev writev readv 30 x 1block ...passed 00:07:37.639 Test: blockdev writev readv block ...passed 00:07:37.639 Test: blockdev writev readv size > 128k ...passed 00:07:37.639 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:37.639 Test: blockdev comparev and writev ...[2024-12-11 13:48:30.449583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c7430000 len:0x1000 00:07:37.639 [2024-12-11 13:48:30.449629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:37.639 passed 00:07:37.639 Test: blockdev nvme passthru rw ...passed 00:07:37.639 Test: blockdev nvme passthru vendor specific ...passed 00:07:37.639 Test: blockdev nvme admin passthru ...passed 00:07:37.639 Test: blockdev copy ...passed 00:07:37.639 Suite: bdevio tests on: Nvme1n1p1 00:07:37.639 Test: blockdev write read block ...passed 00:07:37.639 Test: blockdev write zeroes read block ...passed 00:07:37.639 Test: blockdev write zeroes read no split ...passed 00:07:37.639 Test: blockdev write zeroes read split ...passed 00:07:37.639 Test: blockdev write zeroes read split partial ...passed 00:07:37.639 Test: blockdev reset ...[2024-12-11 13:48:30.517950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:37.639 [2024-12-11 13:48:30.521807] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:07:37.639 passed 00:07:37.639 Test: blockdev write read 8 blocks ...passed 00:07:37.639 Test: blockdev write read size > 128k ... 
00:07:37.639 passed 00:07:37.639 Test: blockdev write read invalid size ...passed 00:07:37.639 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:37.639 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:37.639 Test: blockdev write read max offset ...passed 00:07:37.639 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:37.639 Test: blockdev writev readv 8 blocks ...passed 00:07:37.639 Test: blockdev writev readv 30 x 1block ...passed 00:07:37.639 Test: blockdev writev readv block ...passed 00:07:37.639 Test: blockdev writev readv size > 128k ...passed 00:07:37.639 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:37.639 Test: blockdev comparev and writev ...[2024-12-11 13:48:30.533211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b380e000 len:0x1000 00:07:37.639 [2024-12-11 13:48:30.533254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:37.639 passed 00:07:37.639 Test: blockdev nvme passthru rw ...passed 00:07:37.639 Test: blockdev nvme passthru vendor specific ...passed 00:07:37.639 Test: blockdev nvme admin passthru ...passed 00:07:37.639 Test: blockdev copy ...passed 00:07:37.639 Suite: bdevio tests on: Nvme0n1 00:07:37.639 Test: blockdev write read block ...passed 00:07:37.639 Test: blockdev write zeroes read block ...passed 00:07:37.639 Test: blockdev write zeroes read no split ...passed 00:07:37.639 Test: blockdev write zeroes read split ...passed 00:07:37.639 Test: blockdev write zeroes read split partial ...passed 00:07:37.639 Test: blockdev reset ...[2024-12-11 13:48:30.603522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:37.639 [2024-12-11 13:48:30.607476] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:37.639 passed 00:07:37.639 Test: blockdev write read 8 blocks ...passed 00:07:37.639 Test: blockdev write read size > 128k ...passed 00:07:37.639 Test: blockdev write read invalid size ...passed 00:07:37.639 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:37.639 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:37.639 Test: blockdev write read max offset ...passed 00:07:37.639 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:37.639 Test: blockdev writev readv 8 blocks ...passed 00:07:37.639 Test: blockdev writev readv 30 x 1block ...passed 00:07:37.639 Test: blockdev writev readv block ...passed 00:07:37.639 Test: blockdev writev readv size > 128k ...passed 00:07:37.639 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:37.639 Test: blockdev comparev and writev ...[2024-12-11 13:48:30.616846] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:37.639 separate metadata which is not supported yet. 
00:07:37.639 passed 00:07:37.639 Test: blockdev nvme passthru rw ...passed 00:07:37.639 Test: blockdev nvme passthru vendor specific ...[2024-12-11 13:48:30.618035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:37.639 [2024-12-11 13:48:30.618185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:37.639 passed 00:07:37.639 Test: blockdev nvme admin passthru ...passed 00:07:37.639 Test: blockdev copy ...passed 00:07:37.639 00:07:37.639 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.639 suites 7 7 n/a 0 0 00:07:37.639 tests 161 161 161 0 0 00:07:37.639 asserts 1025 1025 1025 0 n/a 00:07:37.639 00:07:37.639 Elapsed time = 1.697 seconds 00:07:37.639 0 00:07:37.639 13:48:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63687 00:07:37.639 13:48:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63687 ']' 00:07:37.639 13:48:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63687 00:07:37.639 13:48:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:37.639 13:48:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.639 13:48:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63687 00:07:37.899 killing process with pid 63687 00:07:37.899 13:48:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.899 13:48:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.899 13:48:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63687' 00:07:37.899 13:48:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63687 00:07:37.899 13:48:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63687 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:38.838 00:07:38.838 real 0m2.969s 00:07:38.838 user 0m7.587s 00:07:38.838 sys 0m0.423s 00:07:38.838 ************************************ 00:07:38.838 END TEST bdev_bounds 00:07:38.838 ************************************ 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:38.838 13:48:31 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:38.838 13:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:38.838 13:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.838 13:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.838 ************************************ 00:07:38.838 START TEST bdev_nbd 00:07:38.838 ************************************ 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:38.838 13:48:31 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63752 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:38.838 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:38.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:38.839 13:48:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63752 /var/tmp/spdk-nbd.sock 00:07:38.839 13:48:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63752 ']' 00:07:38.839 13:48:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:38.839 13:48:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.839 13:48:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:38.839 13:48:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.839 13:48:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:39.096 [2024-12-11 13:48:31.944549] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
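The xtrace below repeats one start/verify cycle per bdev in bdev_list. Condensed out of the nbd_common.sh helpers, the cycle driven against the bdev_svc app whose startup banner appears above amounts to the following (socket and paths exactly as this job uses them; a minimal sketch, omitting the helper's 20-attempt retry loop):

  SOCK=/var/tmp/spdk-nbd.sock
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Export one bdev as a kernel NBD device.
  $RPC -s $SOCK nbd_start_disk Nvme0n1 /dev/nbd0
  # waitfornbd: the device counts as ready once it shows up in
  # /proc/partitions and one 4 KiB direct-I/O read from it succeeds.
  grep -q -w nbd0 /proc/partitions
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
  # List the active exports, then tear the device down again.
  $RPC -s $SOCK nbd_get_disks
  $RPC -s $SOCK nbd_stop_disk /dev/nbd0
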
00:07:39.096 [2024-12-11 13:48:31.944871] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.096 [2024-12-11 13:48:32.126325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.355 [2024-12-11 13:48:32.244958] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.922 13:48:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.922 13:48:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:39.922 13:48:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:39.922 13:48:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.922 13:48:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:39.922 13:48:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:39.922 13:48:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:39.922 13:48:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.923 13:48:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:39.923 13:48:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:39.923 13:48:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:39.923 13:48:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:39.923 13:48:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:39.923 13:48:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:39.923 13:48:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:40.180 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:40.180 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:40.180 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:40.180 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:40.181 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:40.181 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.181 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.181 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:40.181 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:40.181 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.181 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.181 13:48:33 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.181 1+0 records in 00:07:40.181 1+0 records out 00:07:40.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603383 s, 6.8 MB/s 00:07:40.181 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.181 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:40.181 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.181 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.181 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:40.181 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:40.181 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:40.439 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:40.439 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:40.439 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:40.439 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:40.439 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:40.439 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:40.439 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.439 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.439 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:40.439 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:40.439 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.439 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.439 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.439 1+0 records in 00:07:40.439 1+0 records out 00:07:40.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000756395 s, 5.4 MB/s 00:07:40.439 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.439 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:40.439 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.697 1+0 records in 00:07:40.697 1+0 records out 00:07:40.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00223512 s, 1.8 MB/s 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:40.697 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.956 1+0 records in 00:07:40.956 1+0 records out 00:07:40.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000756382 s, 5.4 MB/s 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:40.956 13:48:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.215 1+0 records in 00:07:41.215 1+0 records out 00:07:41.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604624 s, 6.8 MB/s 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:41.215 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.474 1+0 records in 00:07:41.474 1+0 records out 00:07:41.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00084268 s, 4.9 MB/s 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:41.474 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.733 1+0 records in 00:07:41.733 1+0 records out 00:07:41.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590072 s, 6.9 MB/s 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:41.733 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:41.992 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:41.992 { 00:07:41.992 "nbd_device": "/dev/nbd0", 00:07:41.992 "bdev_name": "Nvme0n1" 00:07:41.992 }, 00:07:41.992 { 00:07:41.992 "nbd_device": "/dev/nbd1", 00:07:41.992 "bdev_name": "Nvme1n1p1" 00:07:41.992 }, 00:07:41.992 { 00:07:41.992 "nbd_device": "/dev/nbd2", 00:07:41.992 "bdev_name": "Nvme1n1p2" 00:07:41.992 }, 00:07:41.992 { 00:07:41.992 "nbd_device": "/dev/nbd3", 00:07:41.992 "bdev_name": "Nvme2n1" 00:07:41.992 }, 00:07:41.992 { 00:07:41.992 "nbd_device": "/dev/nbd4", 00:07:41.992 "bdev_name": "Nvme2n2" 00:07:41.992 }, 00:07:41.992 { 00:07:41.992 "nbd_device": "/dev/nbd5", 00:07:41.992 "bdev_name": "Nvme2n3" 00:07:41.992 }, 00:07:41.992 { 00:07:41.992 "nbd_device": "/dev/nbd6", 00:07:41.992 "bdev_name": "Nvme3n1" 00:07:41.992 } 00:07:41.992 ]' 00:07:41.992 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:41.992 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:41.992 { 00:07:41.992 "nbd_device": "/dev/nbd0", 00:07:41.992 "bdev_name": "Nvme0n1" 00:07:41.992 }, 00:07:41.992 { 00:07:41.992 "nbd_device": "/dev/nbd1", 00:07:41.992 "bdev_name": "Nvme1n1p1" 00:07:41.992 }, 00:07:41.992 { 00:07:41.992 "nbd_device": "/dev/nbd2", 00:07:41.992 "bdev_name": "Nvme1n1p2" 00:07:41.992 }, 00:07:41.992 { 00:07:41.992 "nbd_device": "/dev/nbd3", 00:07:41.992 "bdev_name": "Nvme2n1" 00:07:41.992 }, 00:07:41.992 { 00:07:41.992 "nbd_device": "/dev/nbd4", 00:07:41.992 "bdev_name": "Nvme2n2" 00:07:41.992 }, 00:07:41.992 { 00:07:41.992 "nbd_device": "/dev/nbd5", 00:07:41.992 "bdev_name": "Nvme2n3" 00:07:41.992 }, 00:07:41.992 { 00:07:41.992 "nbd_device": "/dev/nbd6", 00:07:41.992 "bdev_name": "Nvme3n1" 00:07:41.992 } 00:07:41.992 ]' 00:07:41.992 13:48:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:41.992 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:41.992 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.992 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:41.992 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:41.992 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:41.992 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.992 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:42.258 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:42.258 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:42.258 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:42.258 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.258 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.258 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:42.258 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.258 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.258 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.258 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:42.516 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:42.516 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:42.516 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:42.516 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.516 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.516 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:42.516 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.516 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.516 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.516 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:42.775 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:42.775 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:42.775 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:42.775 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.775 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.775 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:42.775 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.775 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.775 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.775 13:48:35 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:43.034 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:43.034 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:43.034 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:43.034 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.034 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.034 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:43.034 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.034 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.034 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.034 13:48:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:43.293 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:43.293 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:43.293 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:43.293 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.293 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.293 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:43.293 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.293 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.293 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.293 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:43.293 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:43.293 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:43.293 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:43.293 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.293 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.293 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:43.553 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.553 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.553 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.553 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:43.553 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:43.553 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:43.553 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
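Once the last device is drained just below, the helper asserts that no exports remain. Condensed from the nbd_get_count trace that follows (same socket; a sketch of the check, swallowing grep -c's non-zero exit on no match the way the helper's trailing true does):

  disks=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device')
  # grep -c prints the number of matching lines; with nothing exported it
  # prints 0 but exits non-zero, hence the || true.
  count=$(echo "$disks" | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]
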
00:07:43.553 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.553 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.553 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:43.553 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.553 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.553 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:43.553 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.553 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:43.812 13:48:36 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:43.812 13:48:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:44.071 /dev/nbd0 00:07:44.071 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:44.071 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:44.071 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:44.071 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.071 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.071 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.071 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:44.071 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.071 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.071 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.071 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.071 1+0 records in 00:07:44.071 1+0 records out 00:07:44.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000744016 s, 5.5 MB/s 00:07:44.071 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.071 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.071 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.071 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.071 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.072 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.072 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:44.072 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:44.331 /dev/nbd1 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.331 13:48:37 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.331 1+0 records in 00:07:44.331 1+0 records out 00:07:44.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000723062 s, 5.7 MB/s 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:44.331 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:44.590 /dev/nbd10 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.590 1+0 records in 00:07:44.590 1+0 records out 00:07:44.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518365 s, 7.9 MB/s 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:44.590 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:44.849 /dev/nbd11 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.849 1+0 records in 00:07:44.849 1+0 records out 00:07:44.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010023 s, 4.1 MB/s 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:44.849 13:48:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:45.108 /dev/nbd12 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.108 1+0 records in 00:07:45.108 1+0 records out 00:07:45.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737389 s, 5.6 MB/s 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:45.108 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:45.368 /dev/nbd13 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.368 1+0 records in 00:07:45.368 1+0 records out 00:07:45.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102278 s, 4.0 MB/s 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:45.368 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:45.627 /dev/nbd14 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.627 1+0 records in 00:07:45.627 1+0 records out 00:07:45.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811481 s, 5.0 MB/s 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.627 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:45.895 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:45.895 { 00:07:45.895 "nbd_device": "/dev/nbd0", 00:07:45.895 "bdev_name": "Nvme0n1" 00:07:45.895 }, 00:07:45.895 { 00:07:45.895 "nbd_device": "/dev/nbd1", 00:07:45.895 "bdev_name": "Nvme1n1p1" 00:07:45.895 }, 00:07:45.895 { 00:07:45.895 "nbd_device": "/dev/nbd10", 00:07:45.895 "bdev_name": "Nvme1n1p2" 00:07:45.895 }, 00:07:45.895 { 00:07:45.895 "nbd_device": "/dev/nbd11", 00:07:45.895 "bdev_name": "Nvme2n1" 00:07:45.895 }, 00:07:45.895 { 00:07:45.895 "nbd_device": "/dev/nbd12", 00:07:45.895 "bdev_name": "Nvme2n2" 00:07:45.895 }, 00:07:45.895 { 00:07:45.895 "nbd_device": "/dev/nbd13", 00:07:45.895 "bdev_name": "Nvme2n3" 
00:07:45.895 }, 00:07:45.895 { 00:07:45.895 "nbd_device": "/dev/nbd14", 00:07:45.895 "bdev_name": "Nvme3n1" 00:07:45.895 } 00:07:45.895 ]' 00:07:45.895 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:45.895 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:45.895 { 00:07:45.895 "nbd_device": "/dev/nbd0", 00:07:45.895 "bdev_name": "Nvme0n1" 00:07:45.895 }, 00:07:45.895 { 00:07:45.895 "nbd_device": "/dev/nbd1", 00:07:45.895 "bdev_name": "Nvme1n1p1" 00:07:45.895 }, 00:07:45.895 { 00:07:45.895 "nbd_device": "/dev/nbd10", 00:07:45.895 "bdev_name": "Nvme1n1p2" 00:07:45.895 }, 00:07:45.895 { 00:07:45.895 "nbd_device": "/dev/nbd11", 00:07:45.895 "bdev_name": "Nvme2n1" 00:07:45.895 }, 00:07:45.895 { 00:07:45.895 "nbd_device": "/dev/nbd12", 00:07:45.895 "bdev_name": "Nvme2n2" 00:07:45.895 }, 00:07:45.895 { 00:07:45.895 "nbd_device": "/dev/nbd13", 00:07:45.895 "bdev_name": "Nvme2n3" 00:07:45.895 }, 00:07:45.895 { 00:07:45.896 "nbd_device": "/dev/nbd14", 00:07:45.896 "bdev_name": "Nvme3n1" 00:07:45.896 } 00:07:45.896 ]' 00:07:45.896 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:45.896 /dev/nbd1 00:07:45.896 /dev/nbd10 00:07:45.896 /dev/nbd11 00:07:45.896 /dev/nbd12 00:07:45.896 /dev/nbd13 00:07:45.896 /dev/nbd14' 00:07:45.896 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:45.896 /dev/nbd1 00:07:45.896 /dev/nbd10 00:07:45.896 /dev/nbd11 00:07:45.896 /dev/nbd12 00:07:45.896 /dev/nbd13 00:07:45.896 /dev/nbd14' 00:07:45.896 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:45.896 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:45.896 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:45.896 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:45.896 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:45.896 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:45.896 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:45.896 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.896 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:45.896 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:45.896 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:45.896 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:45.896 256+0 records in 00:07:45.896 256+0 records out 00:07:45.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129419 s, 81.0 MB/s 00:07:45.896 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.896 13:48:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:46.163 256+0 records in 00:07:46.163 256+0 records out 00:07:46.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.136449 s, 7.7 MB/s 00:07:46.163 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:46.163 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:46.163 256+0 records in 00:07:46.163 256+0 records out 00:07:46.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144034 s, 7.3 MB/s 00:07:46.163 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:46.163 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:46.422 256+0 records in 00:07:46.422 256+0 records out 00:07:46.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143138 s, 7.3 MB/s 00:07:46.422 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:46.422 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:46.680 256+0 records in 00:07:46.680 256+0 records out 00:07:46.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146089 s, 7.2 MB/s 00:07:46.680 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:46.680 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:46.680 256+0 records in 00:07:46.680 256+0 records out 00:07:46.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141232 s, 7.4 MB/s 00:07:46.680 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:46.680 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:46.939 256+0 records in 00:07:46.939 256+0 records out 00:07:46.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143145 s, 7.3 MB/s 00:07:46.939 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:46.939 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:46.939 256+0 records in 00:07:46.939 256+0 records out 00:07:46.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146866 s, 7.1 MB/s 00:07:46.939 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:46.939 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:46.939 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:46.939 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:46.939 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:46.939 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:46.939 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:46.939 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:46.939 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:46.939 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:46.939 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:46.939 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:46.940 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:47.198 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:47.198 13:48:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:47.198 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:47.198 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:47.198 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:47.198 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:47.198 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:47.198 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:47.198 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:47.198 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:47.198 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.198 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:47.198 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:47.198 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:47.198 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.198 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.457 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:47.716 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:47.716 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:47.716 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:47.716 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.716 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.716 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:47.716 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.716 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.716 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.716 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:47.975 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:47.975 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:47.975 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:47.975 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.975 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.975 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:47.975 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.975 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.975 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.975 13:48:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:48.234 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:48.234 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:48.234 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:48.234 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.234 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.234 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:48.234 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:48.234 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.234 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.234 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:48.493 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:48.493 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:48.493 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:48.493 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.493 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.493 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:48.493 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:48.493 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.493 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.493 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:48.752 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:48.752 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:48.752 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:48.752 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.752 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.752 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:48.752 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:48.752 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.752 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:48.752 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.752 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:49.011 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:49.011 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:49.011 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:49.011 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:49.011 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:49.011 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:49.011 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:49.011 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:49.011 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:49.011 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:49.011 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:49.011 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:49.011 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:49.011 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.011 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:49.011 13:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:49.270 malloc_lvol_verify 00:07:49.270 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:49.270 45500d4f-3a19-45ff-a81c-289730fb4d22 00:07:49.529 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:49.529 3be55c5a-0a95-47a2-9424-a3e7e4d8b9c2 00:07:49.529 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:49.788 /dev/nbd0 00:07:49.788 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:49.788 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:49.788 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:49.788 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:49.788 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:49.788 mke2fs 1.47.0 (5-Feb-2023) 00:07:49.788 Discarding device blocks: 0/4096 done 00:07:49.788 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:49.788 00:07:49.788 Allocating group tables: 0/1 done 00:07:49.788 Writing inode tables: 0/1 done 00:07:49.788 Creating journal (1024 blocks): done 00:07:49.788 Writing superblocks and filesystem accounting information: 0/1 done 00:07:49.788 00:07:49.788 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:49.788 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.788 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:49.788 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:49.788 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:49.788 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:49.788 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:50.048 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:50.048 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:50.048 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:50.048 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.048 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.048 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:50.048 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.048 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.048 13:48:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63752 00:07:50.048 13:48:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63752 ']' 00:07:50.048 13:48:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63752 00:07:50.048 13:48:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:50.048 13:48:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.048 13:48:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63752 00:07:50.048 killing process with pid 63752 00:07:50.048 13:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.048 13:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.048 13:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63752' 00:07:50.048 13:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63752 00:07:50.048 13:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63752 00:07:51.427 ************************************ 00:07:51.427 END TEST bdev_nbd 00:07:51.427 ************************************ 00:07:51.427 13:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:51.427 00:07:51.427 real 0m12.409s 00:07:51.427 user 0m16.063s 00:07:51.427 sys 0m5.181s 00:07:51.427 13:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.427 13:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:51.427 13:48:44 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:51.427 13:48:44 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:51.427 13:48:44 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:51.427 skipping fio tests on NVMe due to multi-ns failures. 00:07:51.427 13:48:44 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:51.427 13:48:44 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:51.428 13:48:44 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:51.428 13:48:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:51.428 13:48:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.428 13:48:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.428 ************************************ 00:07:51.428 START TEST bdev_verify 00:07:51.428 ************************************ 00:07:51.428 13:48:44 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:51.428 [2024-12-11 13:48:44.419186] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:51.428 [2024-12-11 13:48:44.419317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64180 ] 00:07:51.686 [2024-12-11 13:48:44.595412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:51.686 [2024-12-11 13:48:44.713428] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.686 [2024-12-11 13:48:44.713458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.623 Running I/O for 5 seconds... 
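[Annotation] Before the verify results arrive, it is worth condensing what the bdev_nbd test that finished above actually did per device: export a bdev over NBD through the RPC socket, poll /proc/partitions until the kernel node appears, push 1 MiB of random data through it with direct I/O, byte-compare it back, then stop the disk and poll until the node is gone. A minimal sketch of that round trip for one device (RPC paths, the 20-try limit, and the dd/cmp invocations are taken from the log; the sleep interval and the /tmp scratch path are assumptions — the harness uses test/bdev/nbdrandtest under the repo and does not log its retry delay):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" nbd_start_disk Nvme2n2 /dev/nbd12          # export the bdev as /dev/nbd12
    for ((i = 1; i <= 20; i++)); do                              # waitfornbd: is the node present?
        grep -q -w nbd12 /proc/partitions && break
        sleep 0.1                                                # assumed retry interval
    done
    dd if=/dev/nbd12 of=/dev/null bs=4096 count=1 iflag=direct   # confirm readability, as waitfornbd does
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256     # 1 MiB of random test data
    dd if=/tmp/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd12                     # byte-compare the round trip
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd12
    for ((i = 1; i <= 20; i++)); do                              # waitfornbd_exit: is the node gone?
        grep -q -w nbd12 /proc/partitions || break
        sleep 0.1
    done

The lvol variant near the end of the test is the same flow, except the exported bdev is lvs/lvol and the data check is replaced by an mkfs.ext4 on the NBD node.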
00:07:54.566 21760.00 IOPS, 85.00 MiB/s
[2024-12-11T13:48:48.988Z] 21792.00 IOPS, 85.12 MiB/s
[2024-12-11T13:48:49.923Z] 22037.33 IOPS, 86.08 MiB/s
[2024-12-11T13:48:50.859Z] 21600.00 IOPS, 84.38 MiB/s
[2024-12-11T13:48:50.859Z] 21683.20 IOPS, 84.70 MiB/s
00:07:57.812 Latency(us)
00:07:57.812 [2024-12-11T13:48:50.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:57.812 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.812 Verification LBA range: start 0x0 length 0xbd0bd
00:07:57.813 Nvme0n1 : 5.06 1555.87 6.08 0.00 0.00 81877.54 11106.90 93908.61
00:07:57.813 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.813 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:57.813 Nvme0n1 : 5.04 1496.95 5.85 0.00 0.00 85200.10 20002.96 90960.81
00:07:57.813 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.813 Verification LBA range: start 0x0 length 0x4ff80
00:07:57.813 Nvme1n1p1 : 5.08 1562.46 6.10 0.00 0.00 81492.33 16212.92 87591.89
00:07:57.813 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.813 Verification LBA range: start 0x4ff80 length 0x4ff80
00:07:57.813 Nvme1n1p1 : 5.05 1496.52 5.85 0.00 0.00 84990.17 20739.91 84222.97
00:07:57.813 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.813 Verification LBA range: start 0x0 length 0x4ff7f
00:07:57.813 Nvme1n1p2 : 5.08 1562.05 6.10 0.00 0.00 81288.76 16528.76 68641.72
00:07:57.813 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.813 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:07:57.813 Nvme1n1p2 : 5.09 1509.80 5.90 0.00 0.00 84192.67 11738.58 80011.82
00:07:57.813 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.813 Verification LBA range: start 0x0 length 0x80000
00:07:57.813 Nvme2n1 : 5.08 1561.70 6.10 0.00 0.00 81148.54 16107.64 61482.77
00:07:57.813 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.813 Verification LBA range: start 0x80000 length 0x80000
00:07:57.813 Nvme2n1 : 5.09 1509.48 5.90 0.00 0.00 84082.94 11738.58 79169.59
00:07:57.813 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.813 Verification LBA range: start 0x0 length 0x80000
00:07:57.813 Nvme2n2 : 5.08 1561.35 6.10 0.00 0.00 81031.08 15581.25 59377.20
00:07:57.813 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.813 Verification LBA range: start 0x80000 length 0x80000
00:07:57.813 Nvme2n2 : 5.09 1509.16 5.90 0.00 0.00 83959.44 11580.66 77064.02
00:07:57.813 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.813 Verification LBA range: start 0x0 length 0x80000
00:07:57.813 Nvme2n3 : 5.08 1560.90 6.10 0.00 0.00 80902.10 14949.58 61482.77
00:07:57.813 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.813 Verification LBA range: start 0x80000 length 0x80000
00:07:57.813 Nvme2n3 : 5.09 1508.81 5.89 0.00 0.00 83821.93 11475.38 80854.05
00:07:57.813 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.813 Verification LBA range: start 0x0 length 0x20000
00:07:57.813 Nvme3n1 : 5.09 1560.46 6.10 0.00 0.00 80788.33 14317.91 63588.34
00:07:57.813 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.813 Verification LBA range: start 0x20000 length 0x20000
Nvme3n1 : 5.09 1508.48 5.89 0.00 0.00 83680.09 11422.74 82959.63 00:07:57.813 [2024-12-11T13:48:50.860Z] =================================================================================================================== 00:07:57.813 [2024-12-11T13:48:50.860Z] Total : 21463.97 83.84 0.00 0.00 82716.72 11106.90 93908.61 00:07:59.189 00:07:59.189 real 0m7.598s 00:07:59.189 user 0m14.025s 00:07:59.189 sys 0m0.317s 00:07:59.189 ************************************ 00:07:59.189 END TEST bdev_verify 00:07:59.189 ************************************ 00:07:59.189 13:48:51 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.189 13:48:51 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:59.189 13:48:51 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:59.189 13:48:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:59.189 13:48:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.189 13:48:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:59.189 ************************************ 00:07:59.189 START TEST bdev_verify_big_io 00:07:59.189 ************************************ 00:07:59.189 13:48:51 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:59.189 [2024-12-11 13:48:52.087671] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:59.189 [2024-12-11 13:48:52.087781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64285 ] 00:07:59.447 [2024-12-11 13:48:52.266268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:59.447 [2024-12-11 13:48:52.376485] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.447 [2024-12-11 13:48:52.376514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.383 Running I/O for 5 seconds... 
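[Annotation] The verify pass that just completed and the big-I/O pass launched above use the same bdevperf harness; only the I/O size changes between them. Reading the command line straight from the log (the glosses on -q/-o/-w/-t/-m follow standard bdevperf usage; -C is reproduced verbatim without interpretation, since the log does not explain it):

    build/examples/bdevperf \
        --json test/bdev/bdev.json \  # bdev configuration to load (NVMe + GPT partitions)
        -q 128 \                      # 128 outstanding I/Os per job
        -o 4096 \                     # 4 KiB I/Os (the big-I/O pass swaps in -o 65536)
        -w verify \                   # write a pattern, read it back, check it
        -t 5 \                        # run each job for 5 seconds
        -C \                          # flag copied from the log as-is
        -m 0x3                        # core mask: reactors on cores 0 and 1

The -m 0x3 mask is what produces the "Total cores available: 2" notice and the two reactor start-up lines in the output above.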
00:08:05.449 2174.00 IOPS, 135.88 MiB/s
[2024-12-11T13:48:59.064Z] 3345.50 IOPS, 209.09 MiB/s
[2024-12-11T13:48:59.064Z] 3838.00 IOPS, 239.88 MiB/s
00:08:06.017 Latency(us)
00:08:06.017 [2024-12-11T13:48:59.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:06.017 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.017 Verification LBA range: start 0x0 length 0xbd0b
00:08:06.017 Nvme0n1 : 5.67 139.04 8.69 0.00 0.00 889043.12 26530.24 889394.58
00:08:06.017 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.017 Verification LBA range: start 0xbd0b length 0xbd0b
00:08:06.017 Nvme0n1 : 5.67 137.03 8.56 0.00 0.00 895831.96 19687.12 889394.58
00:08:06.017 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.017 Verification LBA range: start 0x0 length 0x4ff8
00:08:06.017 Nvme1n1p1 : 5.62 140.32 8.77 0.00 0.00 862784.29 72852.87 811909.45
00:08:06.017 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.017 Verification LBA range: start 0x4ff8 length 0x4ff8
00:08:06.017 Nvme1n1p1 : 5.67 146.64 9.16 0.00 0.00 830946.17 74958.44 761375.67
00:08:06.017 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.017 Verification LBA range: start 0x0 length 0x4ff7
00:08:06.017 Nvme1n1p2 : 5.68 145.98 9.12 0.00 0.00 820966.21 52849.91 889394.58
00:08:06.017 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.017 Verification LBA range: start 0x4ff7 length 0x4ff7
00:08:06.017 Nvme1n1p2 : 5.72 145.21 9.08 0.00 0.00 813490.78 80854.05 758006.75
00:08:06.017 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.017 Verification LBA range: start 0x0 length 0x8000
00:08:06.017 Nvme2n1 : 5.68 146.28 9.14 0.00 0.00 800392.90 53902.70 909608.10
00:08:06.017 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.017 Verification LBA range: start 0x8000 length 0x8000
00:08:06.017 Nvme2n1 : 5.75 142.58 8.91 0.00 0.00 818204.35 46112.08 1435159.44
00:08:06.017 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.017 Verification LBA range: start 0x0 length 0x8000
00:08:06.017 Nvme2n2 : 5.73 151.70 9.48 0.00 0.00 755594.05 26003.84 791695.94
00:08:06.017 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.017 Verification LBA range: start 0x8000 length 0x8000
00:08:06.017 Nvme2n2 : 5.75 147.22 9.20 0.00 0.00 777685.54 26635.51 1455372.95
00:08:06.017 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.017 Verification LBA range: start 0x0 length 0x8000
00:08:06.017 Nvme2n3 : 5.77 156.07 9.75 0.00 0.00 716890.03 25793.29 929821.61
00:08:06.017 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.017 Verification LBA range: start 0x8000 length 0x8000
00:08:06.017 Nvme2n3 : 5.78 152.17 9.51 0.00 0.00 735086.29 20529.35 1482324.31
00:08:06.017 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.017 Verification LBA range: start 0x0 length 0x2000
00:08:06.017 Nvme3n1 : 5.79 172.44 10.78 0.00 0.00 637321.72 9527.72 838860.80
00:08:06.017 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.017 Verification LBA range: start 0x2000 length 0x2000
00:08:06.017 Nvme3n1 : 5.81 168.07 10.50 0.00 0.00 651050.23 7790.62 1516013.49
00:08:06.017
[2024-12-11T13:48:59.064Z] =================================================================================================================== 00:08:06.017 [2024-12-11T13:48:59.064Z] Total : 2090.75 130.67 0.00 0.00 780496.63 7790.62 1516013.49 00:08:07.924 ************************************ 00:08:07.924 END TEST bdev_verify_big_io 00:08:07.924 ************************************ 00:08:07.924 00:08:07.924 real 0m8.913s 00:08:07.924 user 0m16.674s 00:08:07.924 sys 0m0.329s 00:08:07.924 13:49:00 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.924 13:49:00 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:07.924 13:49:00 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:07.924 13:49:00 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:07.924 13:49:00 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.924 13:49:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:08.183 ************************************ 00:08:08.183 START TEST bdev_write_zeroes 00:08:08.183 ************************************ 00:08:08.183 13:49:00 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.183 [2024-12-11 13:49:01.070815] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:08:08.183 [2024-12-11 13:49:01.070935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64400 ] 00:08:08.443 [2024-12-11 13:49:01.249676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.443 [2024-12-11 13:49:01.350211] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.010 Running I/O for 1 seconds... 
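[Annotation] A quick cross-check of the two result tables above: the MiB/s column is just IOPS times the I/O size. The verify Total of 21463.97 IOPS at 4 KiB gives 21463.97 x 4096 / 2^20 ~ 83.84 MiB/s, and the big-I/O Total of 2090.75 IOPS at 64 KiB gives 2090.75 / 16 ~ 130.67 MiB/s, both matching the printed Total rows. In shell:

    awk 'BEGIN { print 21463.97 * 4096 / 1048576; print 2090.75 * 65536 / 1048576 }'
    # prints 83.8436 and 130.672, i.e. the 83.84 and 130.67 MiB/s reported above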
00:08:10.381 70336.00 IOPS, 274.75 MiB/s 00:08:10.381 Latency(us) 00:08:10.381 [2024-12-11T13:49:03.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.381 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:10.381 Nvme0n1 : 1.03 9986.71 39.01 0.00 0.00 12790.78 11264.82 25582.73 00:08:10.381 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:10.381 Nvme1n1p1 : 1.03 9976.65 38.97 0.00 0.00 12787.25 11264.82 25793.29 00:08:10.381 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:10.381 Nvme1n1p2 : 1.03 9967.16 38.93 0.00 0.00 12767.55 11159.54 25266.89 00:08:10.381 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:10.381 Nvme2n1 : 1.03 9958.16 38.90 0.00 0.00 12721.86 11370.10 21266.30 00:08:10.381 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:10.381 Nvme2n2 : 1.03 9949.26 38.86 0.00 0.00 12714.87 11159.54 21161.02 00:08:10.381 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:10.381 Nvme2n3 : 1.03 9940.42 38.83 0.00 0.00 12673.70 9948.84 21476.86 00:08:10.381 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:10.381 Nvme3n1 : 1.03 9931.73 38.80 0.00 0.00 12661.84 8632.85 22740.20 00:08:10.381 [2024-12-11T13:49:03.428Z] =================================================================================================================== 00:08:10.381 [2024-12-11T13:49:03.428Z] Total : 69710.08 272.31 0.00 0.00 12731.12 8632.85 25793.29 00:08:11.315 00:08:11.315 real 0m3.193s 00:08:11.315 user 0m2.803s 00:08:11.315 sys 0m0.277s 00:08:11.315 13:49:04 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.315 13:49:04 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:11.315 ************************************ 00:08:11.315 END TEST bdev_write_zeroes 00:08:11.315 ************************************ 00:08:11.315 13:49:04 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:11.315 13:49:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:11.315 13:49:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.315 13:49:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:11.315 ************************************ 00:08:11.315 START TEST bdev_json_nonenclosed 00:08:11.315 ************************************ 00:08:11.315 13:49:04 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:11.315 [2024-12-11 13:49:04.345077] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:08:11.315 [2024-12-11 13:49:04.345189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64453 ] 00:08:11.573 [2024-12-11 13:49:04.525016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.832 [2024-12-11 13:49:04.628988] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.832 [2024-12-11 13:49:04.629090] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:11.832 [2024-12-11 13:49:04.629112] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:11.832 [2024-12-11 13:49:04.629124] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.832 00:08:11.832 real 0m0.617s 00:08:11.832 user 0m0.371s 00:08:11.832 sys 0m0.141s 00:08:11.832 13:49:04 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.832 13:49:04 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:11.832 ************************************ 00:08:11.832 END TEST bdev_json_nonenclosed 00:08:11.832 ************************************ 00:08:12.092 13:49:04 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:12.092 13:49:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:12.092 13:49:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.092 13:49:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:12.092 ************************************ 00:08:12.092 START TEST bdev_json_nonarray 00:08:12.092 ************************************ 00:08:12.092 13:49:04 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:12.092 [2024-12-11 13:49:05.039687] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:08:12.092 [2024-12-11 13:49:05.039799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64473 ] 00:08:12.350 [2024-12-11 13:49:05.219706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.351 [2024-12-11 13:49:05.320303] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.351 [2024-12-11 13:49:05.320392] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:12.351 [2024-12-11 13:49:05.320413] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:12.351 [2024-12-11 13:49:05.320425] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:12.609 00:08:12.609 real 0m0.608s 00:08:12.609 user 0m0.372s 00:08:12.609 sys 0m0.131s 00:08:12.609 ************************************ 00:08:12.609 END TEST bdev_json_nonarray 00:08:12.609 ************************************ 00:08:12.609 13:49:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.609 13:49:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:12.609 13:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:08:12.610 13:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:08:12.610 13:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:12.610 13:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:12.610 13:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.610 13:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:12.610 ************************************ 00:08:12.610 START TEST bdev_gpt_uuid 00:08:12.610 ************************************ 00:08:12.610 13:49:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:12.610 13:49:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:08:12.610 13:49:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:08:12.610 13:49:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64504 00:08:12.610 13:49:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:12.610 13:49:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:12.610 13:49:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 64504 00:08:12.610 13:49:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 64504 ']' 00:08:12.610 13:49:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.610 13:49:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.610 13:49:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.610 13:49:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.610 13:49:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:12.868 [2024-12-11 13:49:05.737373] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
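[Annotation] The two config-parsing tests that just completed (bdev_json_nonenclosed and bdev_json_nonarray) hand bdevperf deliberately broken --json files and pass when it shuts down with exactly the errors logged above rather than crashing. A minimal sketch of the shapes involved — the valid skeleton is the usual SPDK JSON-config layout, while the two broken variants are inferred from the error messages, not copied from the actual test files:

    {
        "subsystems": [
            { "subsystem": "bdev", "config": [] }
        ]
    }

nonenclosed.json presents the "subsystems" content without the enclosing top-level {} (hence "not enclosed in {}."), and nonarray.json gives "subsystems" a non-array value (hence "'subsystems' should be an array."). In both cases the app stops on a non-zero code, as the spdk_app_stop WARNING lines above show, and the wrapper counts that as success.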
00:08:12.868 [2024-12-11 13:49:05.737505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64504 ] 00:08:13.127 [2024-12-11 13:49:05.917678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.127 [2024-12-11 13:49:06.023079] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.064 13:49:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.064 13:49:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:14.064 13:49:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:14.064 13:49:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.064 13:49:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:14.323 Some configs were skipped because the RPC state that can call them passed over. 00:08:14.323 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.323 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:08:14.323 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.324 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:14.324 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.324 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:14.324 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.324 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:14.324 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.324 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:08:14.324 { 00:08:14.324 "name": "Nvme1n1p1", 00:08:14.324 "aliases": [ 00:08:14.324 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:14.324 ], 00:08:14.324 "product_name": "GPT Disk", 00:08:14.324 "block_size": 4096, 00:08:14.324 "num_blocks": 655104, 00:08:14.324 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:14.324 "assigned_rate_limits": { 00:08:14.324 "rw_ios_per_sec": 0, 00:08:14.324 "rw_mbytes_per_sec": 0, 00:08:14.324 "r_mbytes_per_sec": 0, 00:08:14.324 "w_mbytes_per_sec": 0 00:08:14.324 }, 00:08:14.324 "claimed": false, 00:08:14.324 "zoned": false, 00:08:14.324 "supported_io_types": { 00:08:14.324 "read": true, 00:08:14.324 "write": true, 00:08:14.324 "unmap": true, 00:08:14.324 "flush": true, 00:08:14.324 "reset": true, 00:08:14.324 "nvme_admin": false, 00:08:14.324 "nvme_io": false, 00:08:14.324 "nvme_io_md": false, 00:08:14.324 "write_zeroes": true, 00:08:14.324 "zcopy": false, 00:08:14.324 "get_zone_info": false, 00:08:14.324 "zone_management": false, 00:08:14.324 "zone_append": false, 00:08:14.324 "compare": true, 00:08:14.324 "compare_and_write": false, 00:08:14.324 "abort": true, 00:08:14.324 "seek_hole": false, 00:08:14.324 "seek_data": false, 00:08:14.324 "copy": true, 00:08:14.324 "nvme_iov_md": false 00:08:14.324 }, 00:08:14.324 "driver_specific": { 
00:08:14.324 "gpt": { 00:08:14.324 "base_bdev": "Nvme1n1", 00:08:14.324 "offset_blocks": 256, 00:08:14.324 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:14.324 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:14.324 "partition_name": "SPDK_TEST_first" 00:08:14.324 } 00:08:14.324 } 00:08:14.324 } 00:08:14.324 ]' 00:08:14.324 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:08:14.324 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:08:14.324 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:08:14.324 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:14.324 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:14.324 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:14.324 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:14.324 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.324 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:14.583 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.583 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:08:14.583 { 00:08:14.583 "name": "Nvme1n1p2", 00:08:14.583 "aliases": [ 00:08:14.583 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:14.583 ], 00:08:14.583 "product_name": "GPT Disk", 00:08:14.583 "block_size": 4096, 00:08:14.583 "num_blocks": 655103, 00:08:14.583 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:14.583 "assigned_rate_limits": { 00:08:14.583 "rw_ios_per_sec": 0, 00:08:14.583 "rw_mbytes_per_sec": 0, 00:08:14.583 "r_mbytes_per_sec": 0, 00:08:14.583 "w_mbytes_per_sec": 0 00:08:14.583 }, 00:08:14.583 "claimed": false, 00:08:14.583 "zoned": false, 00:08:14.583 "supported_io_types": { 00:08:14.583 "read": true, 00:08:14.583 "write": true, 00:08:14.583 "unmap": true, 00:08:14.583 "flush": true, 00:08:14.583 "reset": true, 00:08:14.583 "nvme_admin": false, 00:08:14.583 "nvme_io": false, 00:08:14.583 "nvme_io_md": false, 00:08:14.583 "write_zeroes": true, 00:08:14.583 "zcopy": false, 00:08:14.583 "get_zone_info": false, 00:08:14.583 "zone_management": false, 00:08:14.583 "zone_append": false, 00:08:14.583 "compare": true, 00:08:14.583 "compare_and_write": false, 00:08:14.583 "abort": true, 00:08:14.583 "seek_hole": false, 00:08:14.583 "seek_data": false, 00:08:14.583 "copy": true, 00:08:14.583 "nvme_iov_md": false 00:08:14.583 }, 00:08:14.583 "driver_specific": { 00:08:14.583 "gpt": { 00:08:14.583 "base_bdev": "Nvme1n1", 00:08:14.583 "offset_blocks": 655360, 00:08:14.583 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:14.583 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:14.583 "partition_name": "SPDK_TEST_second" 00:08:14.583 } 00:08:14.583 } 00:08:14.584 } 00:08:14.584 ]' 00:08:14.584 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:08:14.584 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:08:14.584 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:08:14.584 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:14.584 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:14.584 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:14.584 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 64504 00:08:14.584 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 64504 ']' 00:08:14.584 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 64504 00:08:14.584 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:14.584 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.584 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64504 00:08:14.584 killing process with pid 64504 00:08:14.584 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.584 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.584 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64504' 00:08:14.584 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 64504 00:08:14.584 13:49:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 64504 00:08:17.120 00:08:17.120 real 0m4.214s 00:08:17.120 user 0m4.308s 00:08:17.120 sys 0m0.546s 00:08:17.120 13:49:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.120 13:49:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:17.120 ************************************ 00:08:17.120 END TEST bdev_gpt_uuid 00:08:17.120 ************************************ 00:08:17.120 13:49:09 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:08:17.120 13:49:09 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:17.120 13:49:09 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:08:17.120 13:49:09 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:17.120 13:49:09 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:17.120 13:49:09 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:17.120 13:49:09 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:17.120 13:49:09 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:17.120 13:49:09 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:17.688 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:17.947 Waiting for block devices as requested 00:08:17.947 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:17.947 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:18.206 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:18.206 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:23.504 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:23.504 13:49:16 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:23.504 13:49:16 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:23.504 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:23.504 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:23.504 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:23.504 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:23.504 13:49:16 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:23.504 00:08:23.504 real 1m4.665s 00:08:23.504 user 1m20.323s 00:08:23.504 sys 0m12.036s 00:08:23.504 13:49:16 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.504 13:49:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:23.504 ************************************ 00:08:23.504 END TEST blockdev_nvme_gpt 00:08:23.504 ************************************ 00:08:23.763 13:49:16 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:23.763 13:49:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.763 13:49:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.763 13:49:16 -- common/autotest_common.sh@10 -- # set +x 00:08:23.763 ************************************ 00:08:23.763 START TEST nvme 00:08:23.763 ************************************ 00:08:23.763 13:49:16 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:23.763 * Looking for test storage... 00:08:23.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:23.763 13:49:16 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:23.763 13:49:16 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:08:23.763 13:49:16 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:23.763 13:49:16 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:23.763 13:49:16 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.763 13:49:16 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.763 13:49:16 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.763 13:49:16 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.763 13:49:16 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.763 13:49:16 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.763 13:49:16 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.763 13:49:16 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.763 13:49:16 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.763 13:49:16 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.763 13:49:16 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.763 13:49:16 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:23.763 13:49:16 nvme -- scripts/common.sh@345 -- # : 1 00:08:23.763 13:49:16 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.763 13:49:16 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:23.763 13:49:16 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:23.763 13:49:16 nvme -- scripts/common.sh@353 -- # local d=1 00:08:23.763 13:49:16 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.763 13:49:16 nvme -- scripts/common.sh@355 -- # echo 1 00:08:23.763 13:49:16 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.763 13:49:16 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:23.763 13:49:16 nvme -- scripts/common.sh@353 -- # local d=2 00:08:23.763 13:49:16 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.763 13:49:16 nvme -- scripts/common.sh@355 -- # echo 2 00:08:23.763 13:49:16 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.763 13:49:16 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.763 13:49:16 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.763 13:49:16 nvme -- scripts/common.sh@368 -- # return 0 00:08:23.763 13:49:16 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.763 13:49:16 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:23.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.763 --rc genhtml_branch_coverage=1 00:08:23.763 --rc genhtml_function_coverage=1 00:08:23.763 --rc genhtml_legend=1 00:08:23.763 --rc geninfo_all_blocks=1 00:08:23.763 --rc geninfo_unexecuted_blocks=1 00:08:23.763 00:08:23.763 ' 00:08:23.763 13:49:16 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:23.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.763 --rc genhtml_branch_coverage=1 00:08:23.763 --rc genhtml_function_coverage=1 00:08:23.763 --rc genhtml_legend=1 00:08:23.763 --rc geninfo_all_blocks=1 00:08:23.763 --rc geninfo_unexecuted_blocks=1 00:08:23.763 00:08:23.763 ' 00:08:23.763 13:49:16 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:23.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.763 --rc genhtml_branch_coverage=1 00:08:23.763 --rc genhtml_function_coverage=1 00:08:23.763 --rc genhtml_legend=1 00:08:23.763 --rc geninfo_all_blocks=1 00:08:23.763 --rc geninfo_unexecuted_blocks=1 00:08:23.763 00:08:23.763 ' 00:08:23.763 13:49:16 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:23.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.763 --rc genhtml_branch_coverage=1 00:08:23.763 --rc genhtml_function_coverage=1 00:08:23.763 --rc genhtml_legend=1 00:08:23.763 --rc geninfo_all_blocks=1 00:08:23.763 --rc geninfo_unexecuted_blocks=1 00:08:23.763 00:08:23.763 ' 00:08:23.763 13:49:16 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:24.701 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:25.268 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:25.268 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:25.268 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:25.268 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:25.527 13:49:18 nvme -- nvme/nvme.sh@79 -- # uname 00:08:25.527 13:49:18 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:25.527 13:49:18 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:25.527 13:49:18 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:25.527 13:49:18 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:25.527 13:49:18 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:25.527 13:49:18 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:25.527 Waiting for stub to ready for secondary processes... 00:08:25.527 13:49:18 nvme -- common/autotest_common.sh@1075 -- # stubpid=65163 00:08:25.527 13:49:18 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:25.527 13:49:18 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:25.527 13:49:18 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:25.527 13:49:18 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/65163 ]] 00:08:25.527 13:49:18 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:25.527 [2024-12-11 13:49:18.464903] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:08:25.527 [2024-12-11 13:49:18.465041] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:26.464 13:49:19 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:26.464 13:49:19 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/65163 ]] 00:08:26.464 13:49:19 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:26.464 [2024-12-11 13:49:19.505525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:26.724 [2024-12-11 13:49:19.612229] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.724 [2024-12-11 13:49:19.612372] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.724 [2024-12-11 13:49:19.612403] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.724 [2024-12-11 13:49:19.630132] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:26.724 [2024-12-11 13:49:19.630334] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:26.724 [2024-12-11 13:49:19.644921] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:26.724 [2024-12-11 13:49:19.645236] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:26.724 [2024-12-11 13:49:19.648612] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:26.724 [2024-12-11 13:49:19.649008] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:26.724 [2024-12-11 13:49:19.649131] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:26.724 [2024-12-11 13:49:19.652249] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:26.724 [2024-12-11 13:49:19.652566] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:26.724 [2024-12-11 13:49:19.652792] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:26.724 [2024-12-11 13:49:19.655771] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:26.724 [2024-12-11 13:49:19.656166] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:26.724 [2024-12-11 13:49:19.656370] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:26.724 [2024-12-11 13:49:19.656472] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:26.724 [2024-12-11 13:49:19.656638] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:27.661 13:49:20 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:27.661 done. 00:08:27.661 13:49:20 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:27.662 13:49:20 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:27.662 13:49:20 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:27.662 13:49:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.662 13:49:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.662 ************************************ 00:08:27.662 START TEST nvme_reset 00:08:27.662 ************************************ 00:08:27.662 13:49:20 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:27.921 Initializing NVMe Controllers 00:08:27.921 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:27.921 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:27.921 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:27.921 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:27.921 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:27.921 ************************************ 00:08:27.921 END TEST nvme_reset 00:08:27.921 ************************************ 00:08:27.921 00:08:27.921 real 0m0.295s 00:08:27.921 user 0m0.103s 00:08:27.921 sys 0m0.150s 00:08:27.921 13:49:20 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.921 13:49:20 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:27.921 13:49:20 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:27.921 13:49:20 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.921 13:49:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.921 13:49:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.921 ************************************ 00:08:27.921 START TEST nvme_identify 00:08:27.921 ************************************ 00:08:27.921 13:49:20 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:27.921 13:49:20 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:27.921 13:49:20 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:27.921 13:49:20 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:27.921 13:49:20 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:27.921 13:49:20 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:27.921 13:49:20 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:27.921 13:49:20 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:27.921 13:49:20 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:27.921 13:49:20 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:27.921 13:49:20 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:27.921 13:49:20 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:27.921 13:49:20 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:28.183 [2024-12-11 13:49:21.162622] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 65196 terminated unexpected 00:08:28.183 ===================================================== 00:08:28.183 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:28.183 ===================================================== 00:08:28.183 Controller Capabilities/Features 00:08:28.183 ================================ 00:08:28.183 Vendor ID: 1b36 00:08:28.183 Subsystem Vendor ID: 1af4 00:08:28.183 Serial Number: 12340 00:08:28.183 Model Number: QEMU NVMe Ctrl 00:08:28.183 Firmware Version: 8.0.0 00:08:28.183 Recommended Arb Burst: 6 00:08:28.183 IEEE OUI Identifier: 00 54 52 00:08:28.183 Multi-path I/O 00:08:28.183 May have multiple subsystem ports: No 00:08:28.183 May have multiple controllers: No 00:08:28.183 Associated with SR-IOV VF: No 00:08:28.183 Max Data Transfer Size: 524288 00:08:28.183 Max Number of Namespaces: 256 00:08:28.183 Max Number of I/O Queues: 64 00:08:28.183 NVMe Specification Version (VS): 1.4 00:08:28.183 NVMe Specification Version (Identify): 1.4 00:08:28.183 Maximum Queue Entries: 2048 00:08:28.183 Contiguous Queues Required: Yes 00:08:28.183 Arbitration Mechanisms Supported 00:08:28.183 Weighted Round Robin: Not Supported 00:08:28.183 Vendor Specific: Not Supported 00:08:28.183 Reset Timeout: 7500 ms 00:08:28.183 Doorbell Stride: 4 bytes 00:08:28.183 NVM Subsystem Reset: Not Supported 00:08:28.183 Command Sets Supported 00:08:28.183 NVM Command Set: Supported 00:08:28.183 Boot Partition: Not Supported 00:08:28.183 Memory Page Size Minimum: 4096 bytes 00:08:28.183 Memory Page Size Maximum: 65536 bytes 00:08:28.183 Persistent Memory Region: Not Supported 00:08:28.183 Optional Asynchronous Events Supported 00:08:28.183 Namespace Attribute Notices: Supported 00:08:28.183 Firmware Activation Notices: Not Supported 00:08:28.183 ANA Change Notices: Not Supported 00:08:28.183 PLE Aggregate Log Change Notices: Not Supported 00:08:28.183 LBA Status Info Alert Notices: Not Supported 00:08:28.183 EGE Aggregate Log Change Notices: Not Supported 00:08:28.183 Normal NVM Subsystem Shutdown event: Not Supported 00:08:28.183 Zone Descriptor Change Notices: Not Supported 00:08:28.183 Discovery Log Change Notices: Not Supported 00:08:28.183 Controller Attributes 00:08:28.183 128-bit Host Identifier: Not Supported 00:08:28.183 Non-Operational Permissive Mode: Not Supported 00:08:28.183 NVM Sets: Not Supported 00:08:28.183 Read Recovery Levels: Not Supported 00:08:28.183 Endurance Groups: Not Supported 00:08:28.183 Predictable Latency Mode: Not Supported 00:08:28.183 Traffic Based Keep ALive: Not Supported 00:08:28.183 Namespace Granularity: Not Supported 00:08:28.183 SQ Associations: Not Supported 00:08:28.183 UUID List: Not Supported 00:08:28.183 Multi-Domain Subsystem: Not Supported 00:08:28.183 Fixed Capacity Management: Not Supported 00:08:28.183 Variable Capacity Management: Not Supported 00:08:28.183 Delete Endurance Group: Not Supported 00:08:28.183 Delete NVM Set: Not Supported 00:08:28.183 Extended LBA Formats Supported: Supported 00:08:28.183 Flexible Data Placement Supported: Not Supported 00:08:28.183 00:08:28.183 Controller Memory Buffer Support 00:08:28.183 ================================ 00:08:28.183 Supported: No 00:08:28.183 00:08:28.183 Persistent 
Memory Region Support 00:08:28.183 ================================ 00:08:28.183 Supported: No 00:08:28.183 00:08:28.183 Admin Command Set Attributes 00:08:28.183 ============================ 00:08:28.183 Security Send/Receive: Not Supported 00:08:28.183 Format NVM: Supported 00:08:28.183 Firmware Activate/Download: Not Supported 00:08:28.183 Namespace Management: Supported 00:08:28.183 Device Self-Test: Not Supported 00:08:28.183 Directives: Supported 00:08:28.183 NVMe-MI: Not Supported 00:08:28.183 Virtualization Management: Not Supported 00:08:28.183 Doorbell Buffer Config: Supported 00:08:28.183 Get LBA Status Capability: Not Supported 00:08:28.183 Command & Feature Lockdown Capability: Not Supported 00:08:28.183 Abort Command Limit: 4 00:08:28.183 Async Event Request Limit: 4 00:08:28.183 Number of Firmware Slots: N/A 00:08:28.183 Firmware Slot 1 Read-Only: N/A 00:08:28.183 Firmware Activation Without Reset: N/A 00:08:28.183 Multiple Update Detection Support: N/A 00:08:28.183 Firmware Update Granularity: No Information Provided 00:08:28.183 Per-Namespace SMART Log: Yes 00:08:28.183 Asymmetric Namespace Access Log Page: Not Supported 00:08:28.183 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:28.183 Command Effects Log Page: Supported 00:08:28.183 Get Log Page Extended Data: Supported 00:08:28.183 Telemetry Log Pages: Not Supported 00:08:28.183 Persistent Event Log Pages: Not Supported 00:08:28.183 Supported Log Pages Log Page: May Support 00:08:28.183 Commands Supported & Effects Log Page: Not Supported 00:08:28.183 Feature Identifiers & Effects Log Page:May Support 00:08:28.183 NVMe-MI Commands & Effects Log Page: May Support 00:08:28.183 Data Area 4 for Telemetry Log: Not Supported 00:08:28.183 Error Log Page Entries Supported: 1 00:08:28.183 Keep Alive: Not Supported 00:08:28.183 00:08:28.183 NVM Command Set Attributes 00:08:28.183 ========================== 00:08:28.183 Submission Queue Entry Size 00:08:28.183 Max: 64 00:08:28.183 Min: 64 00:08:28.183 Completion Queue Entry Size 00:08:28.183 Max: 16 00:08:28.183 Min: 16 00:08:28.183 Number of Namespaces: 256 00:08:28.183 Compare Command: Supported 00:08:28.183 Write Uncorrectable Command: Not Supported 00:08:28.183 Dataset Management Command: Supported 00:08:28.183 Write Zeroes Command: Supported 00:08:28.183 Set Features Save Field: Supported 00:08:28.183 Reservations: Not Supported 00:08:28.183 Timestamp: Supported 00:08:28.183 Copy: Supported 00:08:28.183 Volatile Write Cache: Present 00:08:28.183 Atomic Write Unit (Normal): 1 00:08:28.183 Atomic Write Unit (PFail): 1 00:08:28.183 Atomic Compare & Write Unit: 1 00:08:28.183 Fused Compare & Write: Not Supported 00:08:28.183 Scatter-Gather List 00:08:28.183 SGL Command Set: Supported 00:08:28.183 SGL Keyed: Not Supported 00:08:28.183 SGL Bit Bucket Descriptor: Not Supported 00:08:28.183 SGL Metadata Pointer: Not Supported 00:08:28.183 Oversized SGL: Not Supported 00:08:28.183 SGL Metadata Address: Not Supported 00:08:28.183 SGL Offset: Not Supported 00:08:28.183 Transport SGL Data Block: Not Supported 00:08:28.183 Replay Protected Memory Block: Not Supported 00:08:28.183 00:08:28.183 Firmware Slot Information 00:08:28.183 ========================= 00:08:28.183 Active slot: 1 00:08:28.183 Slot 1 Firmware Revision: 1.0 00:08:28.183 00:08:28.183 00:08:28.183 Commands Supported and Effects 00:08:28.183 ============================== 00:08:28.183 Admin Commands 00:08:28.183 -------------- 00:08:28.183 Delete I/O Submission Queue (00h): Supported 00:08:28.183 Create I/O Submission 
Queue (01h): Supported 00:08:28.183 Get Log Page (02h): Supported 00:08:28.183 Delete I/O Completion Queue (04h): Supported 00:08:28.183 Create I/O Completion Queue (05h): Supported 00:08:28.183 Identify (06h): Supported 00:08:28.184 Abort (08h): Supported 00:08:28.184 Set Features (09h): Supported 00:08:28.184 Get Features (0Ah): Supported 00:08:28.184 Asynchronous Event Request (0Ch): Supported 00:08:28.184 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:28.184 Directive Send (19h): Supported 00:08:28.184 Directive Receive (1Ah): Supported 00:08:28.184 Virtualization Management (1Ch): Supported 00:08:28.184 Doorbell Buffer Config (7Ch): Supported 00:08:28.184 Format NVM (80h): Supported LBA-Change 00:08:28.184 I/O Commands 00:08:28.184 ------------ 00:08:28.184 Flush (00h): Supported LBA-Change 00:08:28.184 Write (01h): Supported LBA-Change 00:08:28.184 Read (02h): Supported 00:08:28.184 Compare (05h): Supported 00:08:28.184 Write Zeroes (08h): Supported LBA-Change 00:08:28.184 Dataset Management (09h): Supported LBA-Change 00:08:28.184 Unknown (0Ch): Supported 00:08:28.184 Unknown (12h): Supported 00:08:28.184 Copy (19h): Supported LBA-Change 00:08:28.184 Unknown (1Dh): Supported LBA-Change 00:08:28.184 00:08:28.184 Error Log 00:08:28.184 ========= 00:08:28.184 00:08:28.184 Arbitration 00:08:28.184 =========== 00:08:28.184 Arbitration Burst: no limit 00:08:28.184 00:08:28.184 Power Management 00:08:28.184 ================ 00:08:28.184 Number of Power States: 1 00:08:28.184 Current Power State: Power State #0 00:08:28.184 Power State #0: 00:08:28.184 Max Power: 25.00 W 00:08:28.184 Non-Operational State: Operational 00:08:28.184 Entry Latency: 16 microseconds 00:08:28.184 Exit Latency: 4 microseconds 00:08:28.184 Relative Read Throughput: 0 00:08:28.184 Relative Read Latency: 0 00:08:28.184 Relative Write Throughput: 0 00:08:28.184 Relative Write Latency: 0 00:08:28.184 Idle Power: Not Reported 00:08:28.184 [2024-12-11 13:49:21.163784] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 65196 terminated unexpected 00:08:28.184 Active Power: Not Reported 00:08:28.184 Non-Operational Permissive Mode: Not Supported 00:08:28.184 00:08:28.184 Health Information 00:08:28.184 ================== 00:08:28.184 Critical Warnings: 00:08:28.184 Available Spare Space: OK 00:08:28.184 Temperature: OK 00:08:28.184 Device Reliability: OK 00:08:28.184 Read Only: No 00:08:28.184 Volatile Memory Backup: OK 00:08:28.184 Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.184 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:28.184 Available Spare: 0% 00:08:28.184 Available Spare Threshold: 0% 00:08:28.184 Life Percentage Used: 0% 00:08:28.184 Data Units Read: 777 00:08:28.184 Data Units Written: 705 00:08:28.184 Host Read Commands: 37106 00:08:28.184 Host Write Commands: 36892 00:08:28.184 Controller Busy Time: 0 minutes 00:08:28.184 Power Cycles: 0 00:08:28.184 Power On Hours: 0 hours 00:08:28.184 Unsafe Shutdowns: 0 00:08:28.184 Unrecoverable Media Errors: 0 00:08:28.184 Lifetime Error Log Entries: 0 00:08:28.184 Warning Temperature Time: 0 minutes 00:08:28.184 Critical Temperature Time: 0 minutes 00:08:28.184 00:08:28.184 Number of Queues 00:08:28.184 ================ 00:08:28.184 Number of I/O Submission Queues: 64 00:08:28.184 Number of I/O Completion Queues: 64 00:08:28.184 00:08:28.184 ZNS Specific Controller Data 00:08:28.184 ============================ 00:08:28.184 Zone Append Size Limit: 0 00:08:28.184 00:08:28.184 
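The bdev_gpt_uuid checks traced earlier in this log reduce to one invariant: a GPT bdev's first alias must equal the unique_partition_guid reported under driver_specific.gpt. A minimal standalone sketch of that same check against a running SPDK target follows; it assumes the in-tree scripts/rpc.py with its default socket, and the bdev name Nvme1n1p2 is taken from the dump above.

    #!/usr/bin/env bash
    # Verify that a GPT bdev's alias matches its GPT unique partition GUID,
    # mirroring the jq checks from bdev/blockdev.sh traced above.
    set -euo pipefail

    bdev=${1:-Nvme1n1p2}   # bdev name as reported by bdev_get_bdevs
    json=$(scripts/rpc.py bdev_get_bdevs -b "$bdev")

    alias_uuid=$(jq -r '.[0].aliases[0]' <<<"$json")
    part_guid=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$json")

    if [[ "$alias_uuid" == "$part_guid" ]]; then
        echo "OK: $bdev alias matches unique_partition_guid ($part_guid)"
    else
        echo "Mismatch: alias=$alias_uuid guid=$part_guid" >&2
        exit 1
    fi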
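The lcov gate traced earlier (lt 1.15 2 through cmp_versions in scripts/common.sh) compares dotted versions field by field, treating missing fields as zero. A condensed sketch of the same idea, assuming purely numeric fields; the in-tree helper additionally splits on - and : and supports the >, <=, and >= operators.

    # Return success if dotted version $1 is strictly less than $2,
    # comparing numeric fields left to right (missing fields count as 0).
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "1.15 < 2"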
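The stub bring-up traced earlier starts test/app/stub/stub with -s 4096 -i 0 -m 0xE, then polls once per second until /var/run/spdk_stub0 appears, bailing out if the stub process disappears from /proc. A minimal sketch of that readiness wait; the 60-second cap is an added assumption, since the autotest helper simply loops for as long as the stub stays alive.

    #!/usr/bin/env bash
    # Wait for an already-started SPDK stub to become ready, mirroring the
    # '[ -e /var/run/spdk_stub0 ]' / '[[ -e /proc/$stubpid ]]' / 'sleep 1s'
    # loop traced above.
    stubpid=$1                                   # PID of the stub process

    for ((i = 0; i < 60; i++)); do               # 60 s cap is an assumption
        [[ -e /var/run/spdk_stub0 ]] && { echo done.; exit 0; }
        [[ -e /proc/$stubpid ]] || { echo "stub died" >&2; exit 1; }
        sleep 1s
    done
    echo "timed out waiting for stub" >&2
    exit 1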
00:08:28.184 Active Namespaces 00:08:28.184 ================= 00:08:28.184 Namespace ID:1 00:08:28.184 Error Recovery Timeout: Unlimited 00:08:28.184 Command Set Identifier: NVM (00h) 00:08:28.184 Deallocate: Supported 00:08:28.184 Deallocated/Unwritten Error: Supported 00:08:28.184 Deallocated Read Value: All 0x00 00:08:28.184 Deallocate in Write Zeroes: Not Supported 00:08:28.184 Deallocated Guard Field: 0xFFFF 00:08:28.184 Flush: Supported 00:08:28.184 Reservation: Not Supported 00:08:28.184 Metadata Transferred as: Separate Metadata Buffer 00:08:28.184 Namespace Sharing Capabilities: Private 00:08:28.184 Size (in LBAs): 1548666 (5GiB) 00:08:28.184 Capacity (in LBAs): 1548666 (5GiB) 00:08:28.184 Utilization (in LBAs): 1548666 (5GiB) 00:08:28.184 Thin Provisioning: Not Supported 00:08:28.184 Per-NS Atomic Units: No 00:08:28.184 Maximum Single Source Range Length: 128 00:08:28.184 Maximum Copy Length: 128 00:08:28.184 Maximum Source Range Count: 128 00:08:28.184 NGUID/EUI64 Never Reused: No 00:08:28.184 Namespace Write Protected: No 00:08:28.184 Number of LBA Formats: 8 00:08:28.184 Current LBA Format: LBA Format #07 00:08:28.184 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:28.184 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.184 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.184 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.184 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:28.184 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.184 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.184 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.184 00:08:28.184 NVM Specific Namespace Data 00:08:28.184 =========================== 00:08:28.184 Logical Block Storage Tag Mask: 0 00:08:28.184 Protection Information Capabilities: 00:08:28.184 16b Guard Protection Information Storage Tag Support: No 00:08:28.184 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:28.184 Storage Tag Check Read Support: No 00:08:28.184 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.184 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.184 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.184 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.184 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.184 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.184 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.184 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.184 ===================================================== 00:08:28.184 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:28.184 ===================================================== 00:08:28.184 Controller Capabilities/Features 00:08:28.184 ================================ 00:08:28.184 Vendor ID: 1b36 00:08:28.184 Subsystem Vendor ID: 1af4 00:08:28.184 Serial Number: 12341 00:08:28.184 Model Number: QEMU NVMe Ctrl 00:08:28.184 Firmware Version: 8.0.0 00:08:28.184 Recommended Arb Burst: 6 00:08:28.184 IEEE OUI Identifier: 00 54 52 00:08:28.184 Multi-path I/O 00:08:28.184 May have multiple subsystem ports: No 00:08:28.184 May have multiple controllers: No 00:08:28.184 
Associated with SR-IOV VF: No 00:08:28.184 Max Data Transfer Size: 524288 00:08:28.184 Max Number of Namespaces: 256 00:08:28.184 Max Number of I/O Queues: 64 00:08:28.184 NVMe Specification Version (VS): 1.4 00:08:28.184 NVMe Specification Version (Identify): 1.4 00:08:28.184 Maximum Queue Entries: 2048 00:08:28.184 Contiguous Queues Required: Yes 00:08:28.184 Arbitration Mechanisms Supported 00:08:28.184 Weighted Round Robin: Not Supported 00:08:28.184 Vendor Specific: Not Supported 00:08:28.184 Reset Timeout: 7500 ms 00:08:28.184 Doorbell Stride: 4 bytes 00:08:28.184 NVM Subsystem Reset: Not Supported 00:08:28.184 Command Sets Supported 00:08:28.184 NVM Command Set: Supported 00:08:28.184 Boot Partition: Not Supported 00:08:28.184 Memory Page Size Minimum: 4096 bytes 00:08:28.184 Memory Page Size Maximum: 65536 bytes 00:08:28.184 Persistent Memory Region: Not Supported 00:08:28.184 Optional Asynchronous Events Supported 00:08:28.184 Namespace Attribute Notices: Supported 00:08:28.184 Firmware Activation Notices: Not Supported 00:08:28.184 ANA Change Notices: Not Supported 00:08:28.184 PLE Aggregate Log Change Notices: Not Supported 00:08:28.184 LBA Status Info Alert Notices: Not Supported 00:08:28.184 EGE Aggregate Log Change Notices: Not Supported 00:08:28.184 Normal NVM Subsystem Shutdown event: Not Supported 00:08:28.184 Zone Descriptor Change Notices: Not Supported 00:08:28.184 Discovery Log Change Notices: Not Supported 00:08:28.184 Controller Attributes 00:08:28.184 128-bit Host Identifier: Not Supported 00:08:28.184 Non-Operational Permissive Mode: Not Supported 00:08:28.184 NVM Sets: Not Supported 00:08:28.184 Read Recovery Levels: Not Supported 00:08:28.184 Endurance Groups: Not Supported 00:08:28.184 Predictable Latency Mode: Not Supported 00:08:28.184 Traffic Based Keep ALive: Not Supported 00:08:28.184 Namespace Granularity: Not Supported 00:08:28.184 SQ Associations: Not Supported 00:08:28.184 UUID List: Not Supported 00:08:28.184 Multi-Domain Subsystem: Not Supported 00:08:28.184 Fixed Capacity Management: Not Supported 00:08:28.184 Variable Capacity Management: Not Supported 00:08:28.184 Delete Endurance Group: Not Supported 00:08:28.184 Delete NVM Set: Not Supported 00:08:28.184 Extended LBA Formats Supported: Supported 00:08:28.184 Flexible Data Placement Supported: Not Supported 00:08:28.184 00:08:28.184 Controller Memory Buffer Support 00:08:28.184 ================================ 00:08:28.184 Supported: No 00:08:28.184 00:08:28.185 Persistent Memory Region Support 00:08:28.185 ================================ 00:08:28.185 Supported: No 00:08:28.185 00:08:28.185 Admin Command Set Attributes 00:08:28.185 ============================ 00:08:28.185 Security Send/Receive: Not Supported 00:08:28.185 Format NVM: Supported 00:08:28.185 Firmware Activate/Download: Not Supported 00:08:28.185 Namespace Management: Supported 00:08:28.185 Device Self-Test: Not Supported 00:08:28.185 Directives: Supported 00:08:28.185 NVMe-MI: Not Supported 00:08:28.185 Virtualization Management: Not Supported 00:08:28.185 Doorbell Buffer Config: Supported 00:08:28.185 Get LBA Status Capability: Not Supported 00:08:28.185 Command & Feature Lockdown Capability: Not Supported 00:08:28.185 Abort Command Limit: 4 00:08:28.185 Async Event Request Limit: 4 00:08:28.185 Number of Firmware Slots: N/A 00:08:28.185 Firmware Slot 1 Read-Only: N/A 00:08:28.185 Firmware Activation Without Reset: N/A 00:08:28.185 Multiple Update Detection Support: N/A 00:08:28.185 Firmware Update Granularity: No Information 
Provided 00:08:28.185 Per-Namespace SMART Log: Yes 00:08:28.185 Asymmetric Namespace Access Log Page: Not Supported 00:08:28.185 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:28.185 Command Effects Log Page: Supported 00:08:28.185 Get Log Page Extended Data: Supported 00:08:28.185 Telemetry Log Pages: Not Supported 00:08:28.185 Persistent Event Log Pages: Not Supported 00:08:28.185 Supported Log Pages Log Page: May Support 00:08:28.185 Commands Supported & Effects Log Page: Not Supported 00:08:28.185 Feature Identifiers & Effects Log Page:May Support 00:08:28.185 NVMe-MI Commands & Effects Log Page: May Support 00:08:28.185 Data Area 4 for Telemetry Log: Not Supported 00:08:28.185 Error Log Page Entries Supported: 1 00:08:28.185 Keep Alive: Not Supported 00:08:28.185 00:08:28.185 NVM Command Set Attributes 00:08:28.185 ========================== 00:08:28.185 Submission Queue Entry Size 00:08:28.185 Max: 64 00:08:28.185 Min: 64 00:08:28.185 Completion Queue Entry Size 00:08:28.185 Max: 16 00:08:28.185 Min: 16 00:08:28.185 Number of Namespaces: 256 00:08:28.185 Compare Command: Supported 00:08:28.185 Write Uncorrectable Command: Not Supported 00:08:28.185 Dataset Management Command: Supported 00:08:28.185 Write Zeroes Command: Supported 00:08:28.185 Set Features Save Field: Supported 00:08:28.185 Reservations: Not Supported 00:08:28.185 Timestamp: Supported 00:08:28.185 Copy: Supported 00:08:28.185 Volatile Write Cache: Present 00:08:28.185 Atomic Write Unit (Normal): 1 00:08:28.185 Atomic Write Unit (PFail): 1 00:08:28.185 Atomic Compare & Write Unit: 1 00:08:28.185 Fused Compare & Write: Not Supported 00:08:28.185 Scatter-Gather List 00:08:28.185 SGL Command Set: Supported 00:08:28.185 SGL Keyed: Not Supported 00:08:28.185 SGL Bit Bucket Descriptor: Not Supported 00:08:28.185 SGL Metadata Pointer: Not Supported 00:08:28.185 Oversized SGL: Not Supported 00:08:28.185 SGL Metadata Address: Not Supported 00:08:28.185 SGL Offset: Not Supported 00:08:28.185 Transport SGL Data Block: Not Supported 00:08:28.185 Replay Protected Memory Block: Not Supported 00:08:28.185 00:08:28.185 Firmware Slot Information 00:08:28.185 ========================= 00:08:28.185 Active slot: 1 00:08:28.185 Slot 1 Firmware Revision: 1.0 00:08:28.185 00:08:28.185 00:08:28.185 Commands Supported and Effects 00:08:28.185 ============================== 00:08:28.185 Admin Commands 00:08:28.185 -------------- 00:08:28.185 Delete I/O Submission Queue (00h): Supported 00:08:28.185 Create I/O Submission Queue (01h): Supported 00:08:28.185 Get Log Page (02h): Supported 00:08:28.185 Delete I/O Completion Queue (04h): Supported 00:08:28.185 Create I/O Completion Queue (05h): Supported 00:08:28.185 Identify (06h): Supported 00:08:28.185 Abort (08h): Supported 00:08:28.185 Set Features (09h): Supported 00:08:28.185 Get Features (0Ah): Supported 00:08:28.185 Asynchronous Event Request (0Ch): Supported 00:08:28.185 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:28.185 Directive Send (19h): Supported 00:08:28.185 Directive Receive (1Ah): Supported 00:08:28.185 Virtualization Management (1Ch): Supported 00:08:28.185 Doorbell Buffer Config (7Ch): Supported 00:08:28.185 Format NVM (80h): Supported LBA-Change 00:08:28.185 I/O Commands 00:08:28.185 ------------ 00:08:28.185 Flush (00h): Supported LBA-Change 00:08:28.185 Write (01h): Supported LBA-Change 00:08:28.185 Read (02h): Supported 00:08:28.185 Compare (05h): Supported 00:08:28.185 Write Zeroes (08h): Supported LBA-Change 00:08:28.185 Dataset Management (09h): 
Supported LBA-Change 00:08:28.185 Unknown (0Ch): Supported 00:08:28.185 Unknown (12h): Supported 00:08:28.185 Copy (19h): Supported LBA-Change 00:08:28.185 Unknown (1Dh): Supported LBA-Change 00:08:28.185 00:08:28.185 Error Log 00:08:28.185 ========= 00:08:28.185 00:08:28.185 Arbitration 00:08:28.185 =========== 00:08:28.185 Arbitration Burst: no limit 00:08:28.185 00:08:28.185 Power Management 00:08:28.185 ================ 00:08:28.185 Number of Power States: 1 00:08:28.185 Current Power State: Power State #0 00:08:28.185 Power State #0: 00:08:28.185 Max Power: 25.00 W 00:08:28.185 Non-Operational State: Operational 00:08:28.185 Entry Latency: 16 microseconds 00:08:28.185 Exit Latency: 4 microseconds 00:08:28.185 Relative Read Throughput: 0 00:08:28.185 Relative Read Latency: 0 00:08:28.185 Relative Write Throughput: 0 00:08:28.185 Relative Write Latency: 0 00:08:28.185 Idle Power: Not Reported 00:08:28.185 Active Power: Not Reported 00:08:28.185 Non-Operational Permissive Mode: Not Supported 00:08:28.185 00:08:28.185 Health Information 00:08:28.185 ================== 00:08:28.185 Critical Warnings: 00:08:28.185 Available Spare Space: OK 00:08:28.185 Temperature: OK 00:08:28.185 [2024-12-11 13:49:21.164591] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 65196 terminated unexpected 00:08:28.185 Device Reliability: OK 00:08:28.185 Read Only: No 00:08:28.185 Volatile Memory Backup: OK 00:08:28.185 Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.185 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:28.185 Available Spare: 0% 00:08:28.185 Available Spare Threshold: 0% 00:08:28.185 Life Percentage Used: 0% 00:08:28.185 Data Units Read: 1192 00:08:28.185 Data Units Written: 1066 00:08:28.185 Host Read Commands: 55529 00:08:28.185 Host Write Commands: 54416 00:08:28.185 Controller Busy Time: 0 minutes 00:08:28.185 Power Cycles: 0 00:08:28.185 Power On Hours: 0 hours 00:08:28.185 Unsafe Shutdowns: 0 00:08:28.185 Unrecoverable Media Errors: 0 00:08:28.185 Lifetime Error Log Entries: 0 00:08:28.185 Warning Temperature Time: 0 minutes 00:08:28.185 Critical Temperature Time: 0 minutes 00:08:28.185 00:08:28.185 Number of Queues 00:08:28.185 ================ 00:08:28.185 Number of I/O Submission Queues: 64 00:08:28.185 Number of I/O Completion Queues: 64 00:08:28.185 00:08:28.185 ZNS Specific Controller Data 00:08:28.185 ============================ 00:08:28.185 Zone Append Size Limit: 0 00:08:28.185 00:08:28.185 00:08:28.185 Active Namespaces 00:08:28.185 ================= 00:08:28.185 Namespace ID:1 00:08:28.185 Error Recovery Timeout: Unlimited 00:08:28.185 Command Set Identifier: NVM (00h) 00:08:28.185 Deallocate: Supported 00:08:28.185 Deallocated/Unwritten Error: Supported 00:08:28.185 Deallocated Read Value: All 0x00 00:08:28.185 Deallocate in Write Zeroes: Not Supported 00:08:28.185 Deallocated Guard Field: 0xFFFF 00:08:28.185 Flush: Supported 00:08:28.185 Reservation: Not Supported 00:08:28.185 Namespace Sharing Capabilities: Private 00:08:28.185 Size (in LBAs): 1310720 (5GiB) 00:08:28.185 Capacity (in LBAs): 1310720 (5GiB) 00:08:28.185 Utilization (in LBAs): 1310720 (5GiB) 00:08:28.185 Thin Provisioning: Not Supported 00:08:28.185 Per-NS Atomic Units: No 00:08:28.185 Maximum Single Source Range Length: 128 00:08:28.185 Maximum Copy Length: 128 00:08:28.185 Maximum Source Range Count: 128 00:08:28.185 NGUID/EUI64 Never Reused: No 00:08:28.185 Namespace Write Protected: No 00:08:28.185 Number of LBA Formats: 8 00:08:28.185 Current LBA Format: LBA 
Format #04 00:08:28.185 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:28.185 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.185 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.185 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.185 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:28.185 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.185 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.185 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.185 00:08:28.185 NVM Specific Namespace Data 00:08:28.185 =========================== 00:08:28.185 Logical Block Storage Tag Mask: 0 00:08:28.185 Protection Information Capabilities: 00:08:28.185 16b Guard Protection Information Storage Tag Support: No 00:08:28.185 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:28.185 Storage Tag Check Read Support: No 00:08:28.186 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.186 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.186 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.186 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.186 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.186 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.186 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.186 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.186 ===================================================== 00:08:28.186 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:28.186 ===================================================== 00:08:28.186 Controller Capabilities/Features 00:08:28.186 ================================ 00:08:28.186 Vendor ID: 1b36 00:08:28.186 Subsystem Vendor ID: 1af4 00:08:28.186 Serial Number: 12343 00:08:28.186 Model Number: QEMU NVMe Ctrl 00:08:28.186 Firmware Version: 8.0.0 00:08:28.186 Recommended Arb Burst: 6 00:08:28.186 IEEE OUI Identifier: 00 54 52 00:08:28.186 Multi-path I/O 00:08:28.186 May have multiple subsystem ports: No 00:08:28.186 May have multiple controllers: Yes 00:08:28.186 Associated with SR-IOV VF: No 00:08:28.186 Max Data Transfer Size: 524288 00:08:28.186 Max Number of Namespaces: 256 00:08:28.186 Max Number of I/O Queues: 64 00:08:28.186 NVMe Specification Version (VS): 1.4 00:08:28.186 NVMe Specification Version (Identify): 1.4 00:08:28.186 Maximum Queue Entries: 2048 00:08:28.186 Contiguous Queues Required: Yes 00:08:28.186 Arbitration Mechanisms Supported 00:08:28.186 Weighted Round Robin: Not Supported 00:08:28.186 Vendor Specific: Not Supported 00:08:28.186 Reset Timeout: 7500 ms 00:08:28.186 Doorbell Stride: 4 bytes 00:08:28.186 NVM Subsystem Reset: Not Supported 00:08:28.186 Command Sets Supported 00:08:28.186 NVM Command Set: Supported 00:08:28.186 Boot Partition: Not Supported 00:08:28.186 Memory Page Size Minimum: 4096 bytes 00:08:28.186 Memory Page Size Maximum: 65536 bytes 00:08:28.186 Persistent Memory Region: Not Supported 00:08:28.186 Optional Asynchronous Events Supported 00:08:28.186 Namespace Attribute Notices: Supported 00:08:28.186 Firmware Activation Notices: Not Supported 00:08:28.186 ANA Change Notices: Not Supported 00:08:28.186 PLE Aggregate Log Change 
Notices: Not Supported 00:08:28.186 LBA Status Info Alert Notices: Not Supported 00:08:28.186 EGE Aggregate Log Change Notices: Not Supported 00:08:28.186 Normal NVM Subsystem Shutdown event: Not Supported 00:08:28.186 Zone Descriptor Change Notices: Not Supported 00:08:28.186 Discovery Log Change Notices: Not Supported 00:08:28.186 Controller Attributes 00:08:28.186 128-bit Host Identifier: Not Supported 00:08:28.186 Non-Operational Permissive Mode: Not Supported 00:08:28.186 NVM Sets: Not Supported 00:08:28.186 Read Recovery Levels: Not Supported 00:08:28.186 Endurance Groups: Supported 00:08:28.186 Predictable Latency Mode: Not Supported 00:08:28.186 Traffic Based Keep ALive: Not Supported 00:08:28.186 Namespace Granularity: Not Supported 00:08:28.186 SQ Associations: Not Supported 00:08:28.186 UUID List: Not Supported 00:08:28.186 Multi-Domain Subsystem: Not Supported 00:08:28.186 Fixed Capacity Management: Not Supported 00:08:28.186 Variable Capacity Management: Not Supported 00:08:28.186 Delete Endurance Group: Not Supported 00:08:28.186 Delete NVM Set: Not Supported 00:08:28.186 Extended LBA Formats Supported: Supported 00:08:28.186 Flexible Data Placement Supported: Supported 00:08:28.186 00:08:28.186 Controller Memory Buffer Support 00:08:28.186 ================================ 00:08:28.186 Supported: No 00:08:28.186 00:08:28.186 Persistent Memory Region Support 00:08:28.186 ================================ 00:08:28.186 Supported: No 00:08:28.186 00:08:28.186 Admin Command Set Attributes 00:08:28.186 ============================ 00:08:28.186 Security Send/Receive: Not Supported 00:08:28.186 Format NVM: Supported 00:08:28.186 Firmware Activate/Download: Not Supported 00:08:28.186 Namespace Management: Supported 00:08:28.186 Device Self-Test: Not Supported 00:08:28.186 Directives: Supported 00:08:28.186 NVMe-MI: Not Supported 00:08:28.186 Virtualization Management: Not Supported 00:08:28.186 Doorbell Buffer Config: Supported 00:08:28.186 Get LBA Status Capability: Not Supported 00:08:28.186 Command & Feature Lockdown Capability: Not Supported 00:08:28.186 Abort Command Limit: 4 00:08:28.186 Async Event Request Limit: 4 00:08:28.186 Number of Firmware Slots: N/A 00:08:28.186 Firmware Slot 1 Read-Only: N/A 00:08:28.186 Firmware Activation Without Reset: N/A 00:08:28.186 Multiple Update Detection Support: N/A 00:08:28.186 Firmware Update Granularity: No Information Provided 00:08:28.186 Per-Namespace SMART Log: Yes 00:08:28.186 Asymmetric Namespace Access Log Page: Not Supported 00:08:28.186 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:28.186 Command Effects Log Page: Supported 00:08:28.186 Get Log Page Extended Data: Supported 00:08:28.186 Telemetry Log Pages: Not Supported 00:08:28.186 Persistent Event Log Pages: Not Supported 00:08:28.186 Supported Log Pages Log Page: May Support 00:08:28.186 Commands Supported & Effects Log Page: Not Supported 00:08:28.186 Feature Identifiers & Effects Log Page:May Support 00:08:28.186 NVMe-MI Commands & Effects Log Page: May Support 00:08:28.186 Data Area 4 for Telemetry Log: Not Supported 00:08:28.186 Error Log Page Entries Supported: 1 00:08:28.186 Keep Alive: Not Supported 00:08:28.186 00:08:28.186 NVM Command Set Attributes 00:08:28.186 ========================== 00:08:28.186 Submission Queue Entry Size 00:08:28.186 Max: 64 00:08:28.186 Min: 64 00:08:28.186 Completion Queue Entry Size 00:08:28.186 Max: 16 00:08:28.186 Min: 16 00:08:28.186 Number of Namespaces: 256 00:08:28.186 Compare Command: Supported 00:08:28.186 Write 
Uncorrectable Command: Not Supported 00:08:28.186 Dataset Management Command: Supported 00:08:28.186 Write Zeroes Command: Supported 00:08:28.186 Set Features Save Field: Supported 00:08:28.186 Reservations: Not Supported 00:08:28.186 Timestamp: Supported 00:08:28.186 Copy: Supported 00:08:28.186 Volatile Write Cache: Present 00:08:28.186 Atomic Write Unit (Normal): 1 00:08:28.186 Atomic Write Unit (PFail): 1 00:08:28.186 Atomic Compare & Write Unit: 1 00:08:28.186 Fused Compare & Write: Not Supported 00:08:28.186 Scatter-Gather List 00:08:28.186 SGL Command Set: Supported 00:08:28.186 SGL Keyed: Not Supported 00:08:28.186 SGL Bit Bucket Descriptor: Not Supported 00:08:28.186 SGL Metadata Pointer: Not Supported 00:08:28.186 Oversized SGL: Not Supported 00:08:28.186 SGL Metadata Address: Not Supported 00:08:28.186 SGL Offset: Not Supported 00:08:28.186 Transport SGL Data Block: Not Supported 00:08:28.186 Replay Protected Memory Block: Not Supported 00:08:28.186 00:08:28.186 Firmware Slot Information 00:08:28.186 ========================= 00:08:28.186 Active slot: 1 00:08:28.186 Slot 1 Firmware Revision: 1.0 00:08:28.186 00:08:28.186 00:08:28.186 Commands Supported and Effects 00:08:28.186 ============================== 00:08:28.186 Admin Commands 00:08:28.186 -------------- 00:08:28.186 Delete I/O Submission Queue (00h): Supported 00:08:28.186 Create I/O Submission Queue (01h): Supported 00:08:28.186 Get Log Page (02h): Supported 00:08:28.186 Delete I/O Completion Queue (04h): Supported 00:08:28.186 Create I/O Completion Queue (05h): Supported 00:08:28.186 Identify (06h): Supported 00:08:28.186 Abort (08h): Supported 00:08:28.186 Set Features (09h): Supported 00:08:28.186 Get Features (0Ah): Supported 00:08:28.186 Asynchronous Event Request (0Ch): Supported 00:08:28.186 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:28.186 Directive Send (19h): Supported 00:08:28.186 Directive Receive (1Ah): Supported 00:08:28.186 Virtualization Management (1Ch): Supported 00:08:28.186 Doorbell Buffer Config (7Ch): Supported 00:08:28.186 Format NVM (80h): Supported LBA-Change 00:08:28.186 I/O Commands 00:08:28.186 ------------ 00:08:28.186 Flush (00h): Supported LBA-Change 00:08:28.186 Write (01h): Supported LBA-Change 00:08:28.186 Read (02h): Supported 00:08:28.186 Compare (05h): Supported 00:08:28.186 Write Zeroes (08h): Supported LBA-Change 00:08:28.186 Dataset Management (09h): Supported LBA-Change 00:08:28.186 Unknown (0Ch): Supported 00:08:28.186 Unknown (12h): Supported 00:08:28.186 Copy (19h): Supported LBA-Change 00:08:28.186 Unknown (1Dh): Supported LBA-Change 00:08:28.186 00:08:28.186 Error Log 00:08:28.186 ========= 00:08:28.186 00:08:28.186 Arbitration 00:08:28.186 =========== 00:08:28.186 Arbitration Burst: no limit 00:08:28.186 00:08:28.186 Power Management 00:08:28.186 ================ 00:08:28.186 Number of Power States: 1 00:08:28.186 Current Power State: Power State #0 00:08:28.186 Power State #0: 00:08:28.186 Max Power: 25.00 W 00:08:28.186 Non-Operational State: Operational 00:08:28.187 Entry Latency: 16 microseconds 00:08:28.187 Exit Latency: 4 microseconds 00:08:28.187 Relative Read Throughput: 0 00:08:28.187 Relative Read Latency: 0 00:08:28.187 Relative Write Throughput: 0 00:08:28.187 Relative Write Latency: 0 00:08:28.187 Idle Power: Not Reported 00:08:28.187 Active Power: Not Reported 00:08:28.187 Non-Operational Permissive Mode: Not Supported 00:08:28.187 00:08:28.187 Health Information 00:08:28.187 ================== 00:08:28.187 Critical Warnings: 00:08:28.187 
Available Spare Space: OK 00:08:28.187 Temperature: OK 00:08:28.187 Device Reliability: OK 00:08:28.187 Read Only: No 00:08:28.187 Volatile Memory Backup: OK 00:08:28.187 Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.187 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:28.187 Available Spare: 0% 00:08:28.187 Available Spare Threshold: 0% 00:08:28.187 Life Percentage Used: 0% 00:08:28.187 Data Units Read: 893 00:08:28.187 Data Units Written: 822 00:08:28.187 Host Read Commands: 38554 00:08:28.187 Host Write Commands: 37977 00:08:28.187 Controller Busy Time: 0 minutes 00:08:28.187 Power Cycles: 0 00:08:28.187 Power On Hours: 0 hours 00:08:28.187 Unsafe Shutdowns: 0 00:08:28.187 Unrecoverable Media Errors: 0 00:08:28.187 Lifetime Error Log Entries: 0 00:08:28.187 Warning Temperature Time: 0 minutes 00:08:28.187 Critical Temperature Time: 0 minutes 00:08:28.187 00:08:28.187 Number of Queues 00:08:28.187 ================ 00:08:28.187 Number of I/O Submission Queues: 64 00:08:28.187 Number of I/O Completion Queues: 64 00:08:28.187 00:08:28.187 ZNS Specific Controller Data 00:08:28.187 ============================ 00:08:28.187 Zone Append Size Limit: 0 00:08:28.187 00:08:28.187 00:08:28.187 Active Namespaces 00:08:28.187 ================= 00:08:28.187 Namespace ID:1 00:08:28.187 Error Recovery Timeout: Unlimited 00:08:28.187 Command Set Identifier: NVM (00h) 00:08:28.187 Deallocate: Supported 00:08:28.187 Deallocated/Unwritten Error: Supported 00:08:28.187 Deallocated Read Value: All 0x00 00:08:28.187 Deallocate in Write Zeroes: Not Supported 00:08:28.187 Deallocated Guard Field: 0xFFFF 00:08:28.187 Flush: Supported 00:08:28.187 Reservation: Not Supported 00:08:28.187 Namespace Sharing Capabilities: Multiple Controllers 00:08:28.187 Size (in LBAs): 262144 (1GiB) 00:08:28.187 Capacity (in LBAs): 262144 (1GiB) 00:08:28.187 Utilization (in LBAs): 262144 (1GiB) 00:08:28.187 Thin Provisioning: Not Supported 00:08:28.187 Per-NS Atomic Units: No 00:08:28.187 Maximum Single Source Range Length: 128 00:08:28.187 Maximum Copy Length: 128 00:08:28.187 Maximum Source Range Count: 128 00:08:28.187 NGUID/EUI64 Never Reused: No 00:08:28.187 Namespace Write Protected: No 00:08:28.187 Endurance group ID: 1 00:08:28.187 Number of LBA Formats: 8 00:08:28.187 Current LBA Format: LBA Format #04 00:08:28.187 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:28.187 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.187 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.187 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.187 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:28.187 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.187 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.187 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.187 00:08:28.187 Get Feature FDP: 00:08:28.187 ================ 00:08:28.187 Enabled: Yes 00:08:28.187 FDP configuration index: 0 00:08:28.187 00:08:28.187 FDP configurations log page 00:08:28.187 =========================== 00:08:28.187 Number of FDP configurations: 1 00:08:28.187 Version: 0 00:08:28.187 Size: 112 00:08:28.187 FDP Configuration Descriptor: 0 00:08:28.187 Descriptor Size: 96 00:08:28.187 Reclaim Group Identifier format: 2 00:08:28.187 FDP Volatile Write Cache: Not Present 00:08:28.187 FDP Configuration: Valid 00:08:28.187 Vendor Specific Size: 0 00:08:28.187 Number of Reclaim Groups: 2 00:08:28.187 Number of Reclaim Unit Handles: 8 00:08:28.187 Max Placement Identifiers: 128 00:08:28.187 Number of Namespaces Supported: 256 00:08:28.187 Reclaim unit Nominal Size: 6000000 bytes 00:08:28.187 Estimated Reclaim Unit Time Limit: Not Reported 00:08:28.187 RUH Desc #000: RUH Type: Initially Isolated 00:08:28.187 RUH Desc #001: RUH Type: Initially Isolated 00:08:28.187 RUH Desc #002: RUH Type: Initially Isolated 00:08:28.187 RUH Desc #003: RUH Type: Initially Isolated 00:08:28.187 RUH Desc #004: RUH Type: Initially Isolated 00:08:28.187 RUH Desc #005: RUH Type: Initially Isolated 00:08:28.187 RUH Desc #006: RUH Type: Initially Isolated 00:08:28.187 RUH Desc #007: RUH Type: Initially Isolated 00:08:28.187 00:08:28.187 FDP reclaim unit handle usage log page 00:08:28.187 ====================================== 00:08:28.187 Number of Reclaim Unit Handles: 8 00:08:28.187 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:28.187 RUH Usage Desc #001: RUH Attributes: Unused 00:08:28.187 RUH Usage Desc #002: RUH Attributes: Unused 00:08:28.187 RUH Usage Desc #003: RUH Attributes: Unused 00:08:28.187 RUH Usage Desc #004: RUH Attributes: Unused 00:08:28.187 RUH Usage Desc #005: RUH Attributes: Unused 00:08:28.187 RUH Usage Desc #006: RUH Attributes: Unused 00:08:28.187 RUH Usage Desc #007: RUH Attributes: Unused 00:08:28.187 00:08:28.187 FDP statistics log page 00:08:28.187 ======================= 00:08:28.187 Host bytes with metadata written: 534945792 00:08:28.187 Media bytes with metadata written: 535003136 00:08:28.187 [2024-12-11 13:49:21.166151] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 65196 terminated unexpected 00:08:28.187 Media bytes erased: 0 00:08:28.187 00:08:28.187 FDP events log page 00:08:28.187 =================== 00:08:28.187 Number of FDP events: 0 00:08:28.187 00:08:28.187 NVM Specific Namespace Data 00:08:28.187 =========================== 00:08:28.187 Logical Block Storage Tag Mask: 0 00:08:28.187 Protection Information Capabilities: 00:08:28.187 16b Guard Protection Information Storage Tag Support: No 00:08:28.187 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:28.187 Storage Tag Check Read Support: No 00:08:28.187 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.187 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.187 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.187 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.187 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.187 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.187 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.187 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.187 ===================================================== 00:08:28.187 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:28.187 ===================================================== 00:08:28.187 Controller Capabilities/Features 00:08:28.187 ================================ 00:08:28.187 Vendor ID: 1b36 00:08:28.187 Subsystem Vendor ID: 1af4 00:08:28.187 Serial Number: 12342 00:08:28.187 Model Number: QEMU NVMe Ctrl 00:08:28.187 Firmware Version: 8.0.0 00:08:28.187 Recommended Arb Burst: 6 00:08:28.187 IEEE OUI Identifier: 00 54 52 00:08:28.187 Multi-path I/O 
00:08:28.187 May have multiple subsystem ports: No 00:08:28.187 May have multiple controllers: No 00:08:28.187 Associated with SR-IOV VF: No 00:08:28.187 Max Data Transfer Size: 524288 00:08:28.187 Max Number of Namespaces: 256 00:08:28.187 Max Number of I/O Queues: 64 00:08:28.187 NVMe Specification Version (VS): 1.4 00:08:28.187 NVMe Specification Version (Identify): 1.4 00:08:28.187 Maximum Queue Entries: 2048 00:08:28.187 Contiguous Queues Required: Yes 00:08:28.187 Arbitration Mechanisms Supported 00:08:28.187 Weighted Round Robin: Not Supported 00:08:28.187 Vendor Specific: Not Supported 00:08:28.187 Reset Timeout: 7500 ms 00:08:28.187 Doorbell Stride: 4 bytes 00:08:28.188 NVM Subsystem Reset: Not Supported 00:08:28.188 Command Sets Supported 00:08:28.188 NVM Command Set: Supported 00:08:28.188 Boot Partition: Not Supported 00:08:28.188 Memory Page Size Minimum: 4096 bytes 00:08:28.188 Memory Page Size Maximum: 65536 bytes 00:08:28.188 Persistent Memory Region: Not Supported 00:08:28.188 Optional Asynchronous Events Supported 00:08:28.188 Namespace Attribute Notices: Supported 00:08:28.188 Firmware Activation Notices: Not Supported 00:08:28.188 ANA Change Notices: Not Supported 00:08:28.188 PLE Aggregate Log Change Notices: Not Supported 00:08:28.188 LBA Status Info Alert Notices: Not Supported 00:08:28.188 EGE Aggregate Log Change Notices: Not Supported 00:08:28.188 Normal NVM Subsystem Shutdown event: Not Supported 00:08:28.188 Zone Descriptor Change Notices: Not Supported 00:08:28.188 Discovery Log Change Notices: Not Supported 00:08:28.188 Controller Attributes 00:08:28.188 128-bit Host Identifier: Not Supported 00:08:28.188 Non-Operational Permissive Mode: Not Supported 00:08:28.188 NVM Sets: Not Supported 00:08:28.188 Read Recovery Levels: Not Supported 00:08:28.188 Endurance Groups: Not Supported 00:08:28.188 Predictable Latency Mode: Not Supported 00:08:28.188 Traffic Based Keep ALive: Not Supported 00:08:28.188 Namespace Granularity: Not Supported 00:08:28.188 SQ Associations: Not Supported 00:08:28.188 UUID List: Not Supported 00:08:28.188 Multi-Domain Subsystem: Not Supported 00:08:28.188 Fixed Capacity Management: Not Supported 00:08:28.188 Variable Capacity Management: Not Supported 00:08:28.188 Delete Endurance Group: Not Supported 00:08:28.188 Delete NVM Set: Not Supported 00:08:28.188 Extended LBA Formats Supported: Supported 00:08:28.188 Flexible Data Placement Supported: Not Supported 00:08:28.188 00:08:28.188 Controller Memory Buffer Support 00:08:28.188 ================================ 00:08:28.188 Supported: No 00:08:28.188 00:08:28.188 Persistent Memory Region Support 00:08:28.188 ================================ 00:08:28.188 Supported: No 00:08:28.188 00:08:28.188 Admin Command Set Attributes 00:08:28.188 ============================ 00:08:28.188 Security Send/Receive: Not Supported 00:08:28.188 Format NVM: Supported 00:08:28.188 Firmware Activate/Download: Not Supported 00:08:28.188 Namespace Management: Supported 00:08:28.188 Device Self-Test: Not Supported 00:08:28.188 Directives: Supported 00:08:28.188 NVMe-MI: Not Supported 00:08:28.188 Virtualization Management: Not Supported 00:08:28.188 Doorbell Buffer Config: Supported 00:08:28.188 Get LBA Status Capability: Not Supported 00:08:28.188 Command & Feature Lockdown Capability: Not Supported 00:08:28.188 Abort Command Limit: 4 00:08:28.188 Async Event Request Limit: 4 00:08:28.188 Number of Firmware Slots: N/A 00:08:28.188 Firmware Slot 1 Read-Only: N/A 00:08:28.188 Firmware Activation Without Reset: N/A 
00:08:28.188 Multiple Update Detection Support: N/A 00:08:28.188 Firmware Update Granularity: No Information Provided 00:08:28.188 Per-Namespace SMART Log: Yes 00:08:28.188 Asymmetric Namespace Access Log Page: Not Supported 00:08:28.188 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:28.188 Command Effects Log Page: Supported 00:08:28.188 Get Log Page Extended Data: Supported 00:08:28.188 Telemetry Log Pages: Not Supported 00:08:28.188 Persistent Event Log Pages: Not Supported 00:08:28.188 Supported Log Pages Log Page: May Support 00:08:28.188 Commands Supported & Effects Log Page: Not Supported 00:08:28.188 Feature Identifiers & Effects Log Page:May Support 00:08:28.188 NVMe-MI Commands & Effects Log Page: May Support 00:08:28.188 Data Area 4 for Telemetry Log: Not Supported 00:08:28.188 Error Log Page Entries Supported: 1 00:08:28.188 Keep Alive: Not Supported 00:08:28.188 00:08:28.188 NVM Command Set Attributes 00:08:28.188 ========================== 00:08:28.188 Submission Queue Entry Size 00:08:28.188 Max: 64 00:08:28.188 Min: 64 00:08:28.188 Completion Queue Entry Size 00:08:28.188 Max: 16 00:08:28.188 Min: 16 00:08:28.188 Number of Namespaces: 256 00:08:28.188 Compare Command: Supported 00:08:28.188 Write Uncorrectable Command: Not Supported 00:08:28.188 Dataset Management Command: Supported 00:08:28.188 Write Zeroes Command: Supported 00:08:28.188 Set Features Save Field: Supported 00:08:28.188 Reservations: Not Supported 00:08:28.188 Timestamp: Supported 00:08:28.188 Copy: Supported 00:08:28.188 Volatile Write Cache: Present 00:08:28.188 Atomic Write Unit (Normal): 1 00:08:28.188 Atomic Write Unit (PFail): 1 00:08:28.188 Atomic Compare & Write Unit: 1 00:08:28.188 Fused Compare & Write: Not Supported 00:08:28.188 Scatter-Gather List 00:08:28.188 SGL Command Set: Supported 00:08:28.188 SGL Keyed: Not Supported 00:08:28.188 SGL Bit Bucket Descriptor: Not Supported 00:08:28.188 SGL Metadata Pointer: Not Supported 00:08:28.188 Oversized SGL: Not Supported 00:08:28.188 SGL Metadata Address: Not Supported 00:08:28.188 SGL Offset: Not Supported 00:08:28.188 Transport SGL Data Block: Not Supported 00:08:28.188 Replay Protected Memory Block: Not Supported 00:08:28.188 00:08:28.188 Firmware Slot Information 00:08:28.188 ========================= 00:08:28.188 Active slot: 1 00:08:28.188 Slot 1 Firmware Revision: 1.0 00:08:28.188 00:08:28.188 00:08:28.188 Commands Supported and Effects 00:08:28.188 ============================== 00:08:28.188 Admin Commands 00:08:28.188 -------------- 00:08:28.188 Delete I/O Submission Queue (00h): Supported 00:08:28.188 Create I/O Submission Queue (01h): Supported 00:08:28.188 Get Log Page (02h): Supported 00:08:28.188 Delete I/O Completion Queue (04h): Supported 00:08:28.188 Create I/O Completion Queue (05h): Supported 00:08:28.188 Identify (06h): Supported 00:08:28.188 Abort (08h): Supported 00:08:28.188 Set Features (09h): Supported 00:08:28.188 Get Features (0Ah): Supported 00:08:28.188 Asynchronous Event Request (0Ch): Supported 00:08:28.188 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:28.188 Directive Send (19h): Supported 00:08:28.188 Directive Receive (1Ah): Supported 00:08:28.188 Virtualization Management (1Ch): Supported 00:08:28.188 Doorbell Buffer Config (7Ch): Supported 00:08:28.188 Format NVM (80h): Supported LBA-Change 00:08:28.188 I/O Commands 00:08:28.188 ------------ 00:08:28.188 Flush (00h): Supported LBA-Change 00:08:28.188 Write (01h): Supported LBA-Change 00:08:28.188 Read (02h): Supported 00:08:28.188 Compare (05h): 
Supported 00:08:28.188 Write Zeroes (08h): Supported LBA-Change 00:08:28.188 Dataset Management (09h): Supported LBA-Change 00:08:28.188 Unknown (0Ch): Supported 00:08:28.188 Unknown (12h): Supported 00:08:28.188 Copy (19h): Supported LBA-Change 00:08:28.188 Unknown (1Dh): Supported LBA-Change 00:08:28.188 00:08:28.188 Error Log 00:08:28.188 ========= 00:08:28.188 00:08:28.188 Arbitration 00:08:28.188 =========== 00:08:28.188 Arbitration Burst: no limit 00:08:28.188 00:08:28.188 Power Management 00:08:28.188 ================ 00:08:28.188 Number of Power States: 1 00:08:28.188 Current Power State: Power State #0 00:08:28.188 Power State #0: 00:08:28.188 Max Power: 25.00 W 00:08:28.188 Non-Operational State: Operational 00:08:28.188 Entry Latency: 16 microseconds 00:08:28.188 Exit Latency: 4 microseconds 00:08:28.188 Relative Read Throughput: 0 00:08:28.188 Relative Read Latency: 0 00:08:28.188 Relative Write Throughput: 0 00:08:28.188 Relative Write Latency: 0 00:08:28.188 Idle Power: Not Reported 00:08:28.188 Active Power: Not Reported 00:08:28.188 Non-Operational Permissive Mode: Not Supported 00:08:28.188 00:08:28.188 Health Information 00:08:28.188 ================== 00:08:28.188 Critical Warnings: 00:08:28.189 Available Spare Space: OK 00:08:28.189 Temperature: OK 00:08:28.189 Device Reliability: OK 00:08:28.189 Read Only: No 00:08:28.189 Volatile Memory Backup: OK 00:08:28.189 Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.189 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:28.189 Available Spare: 0% 00:08:28.189 Available Spare Threshold: 0% 00:08:28.189 Life Percentage Used: 0% 00:08:28.189 Data Units Read: 2467 00:08:28.189 Data Units Written: 2254 00:08:28.189 Host Read Commands: 113473 00:08:28.189 Host Write Commands: 111742 00:08:28.189 Controller Busy Time: 0 minutes 00:08:28.189 Power Cycles: 0 00:08:28.189 Power On Hours: 0 hours 00:08:28.189 Unsafe Shutdowns: 0 00:08:28.189 Unrecoverable Media Errors: 0 00:08:28.189 Lifetime Error Log Entries: 0 00:08:28.189 Warning Temperature Time: 0 minutes 00:08:28.189 Critical Temperature Time: 0 minutes 00:08:28.189 00:08:28.189 Number of Queues 00:08:28.189 ================ 00:08:28.189 Number of I/O Submission Queues: 64 00:08:28.189 Number of I/O Completion Queues: 64 00:08:28.189 00:08:28.189 ZNS Specific Controller Data 00:08:28.189 ============================ 00:08:28.189 Zone Append Size Limit: 0 00:08:28.189 00:08:28.189 00:08:28.189 Active Namespaces 00:08:28.189 ================= 00:08:28.189 Namespace ID:1 00:08:28.189 Error Recovery Timeout: Unlimited 00:08:28.189 Command Set Identifier: NVM (00h) 00:08:28.189 Deallocate: Supported 00:08:28.189 Deallocated/Unwritten Error: Supported 00:08:28.189 Deallocated Read Value: All 0x00 00:08:28.189 Deallocate in Write Zeroes: Not Supported 00:08:28.189 Deallocated Guard Field: 0xFFFF 00:08:28.189 Flush: Supported 00:08:28.189 Reservation: Not Supported 00:08:28.189 Namespace Sharing Capabilities: Private 00:08:28.189 Size (in LBAs): 1048576 (4GiB) 00:08:28.189 Capacity (in LBAs): 1048576 (4GiB) 00:08:28.189 Utilization (in LBAs): 1048576 (4GiB) 00:08:28.189 Thin Provisioning: Not Supported 00:08:28.189 Per-NS Atomic Units: No 00:08:28.189 Maximum Single Source Range Length: 128 00:08:28.189 Maximum Copy Length: 128 00:08:28.189 Maximum Source Range Count: 128 00:08:28.189 NGUID/EUI64 Never Reused: No 00:08:28.189 Namespace Write Protected: No 00:08:28.189 Number of LBA Formats: 8 00:08:28.189 Current LBA Format: LBA Format #04 00:08:28.189 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:08:28.189 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.189 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.189 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.189 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:28.189 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.189 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.189 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.189 00:08:28.189 NVM Specific Namespace Data 00:08:28.189 =========================== 00:08:28.189 Logical Block Storage Tag Mask: 0 00:08:28.189 Protection Information Capabilities: 00:08:28.189 16b Guard Protection Information Storage Tag Support: No 00:08:28.189 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:28.189 Storage Tag Check Read Support: No 00:08:28.189 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Namespace ID:2 00:08:28.189 Error Recovery Timeout: Unlimited 00:08:28.189 Command Set Identifier: NVM (00h) 00:08:28.189 Deallocate: Supported 00:08:28.189 Deallocated/Unwritten Error: Supported 00:08:28.189 Deallocated Read Value: All 0x00 00:08:28.189 Deallocate in Write Zeroes: Not Supported 00:08:28.189 Deallocated Guard Field: 0xFFFF 00:08:28.189 Flush: Supported 00:08:28.189 Reservation: Not Supported 00:08:28.189 Namespace Sharing Capabilities: Private 00:08:28.189 Size (in LBAs): 1048576 (4GiB) 00:08:28.189 Capacity (in LBAs): 1048576 (4GiB) 00:08:28.189 Utilization (in LBAs): 1048576 (4GiB) 00:08:28.189 Thin Provisioning: Not Supported 00:08:28.189 Per-NS Atomic Units: No 00:08:28.189 Maximum Single Source Range Length: 128 00:08:28.189 Maximum Copy Length: 128 00:08:28.189 Maximum Source Range Count: 128 00:08:28.189 NGUID/EUI64 Never Reused: No 00:08:28.189 Namespace Write Protected: No 00:08:28.189 Number of LBA Formats: 8 00:08:28.189 Current LBA Format: LBA Format #04 00:08:28.189 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:28.189 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.189 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.189 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.189 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:28.189 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.189 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.189 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.189 00:08:28.189 NVM Specific Namespace Data 00:08:28.189 =========================== 00:08:28.189 Logical Block Storage Tag Mask: 0 00:08:28.189 Protection Information Capabilities: 00:08:28.189 16b Guard Protection Information Storage Tag Support: No 00:08:28.189 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:08:28.189 Storage Tag Check Read Support: No 00:08:28.189 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Namespace ID:3 00:08:28.189 Error Recovery Timeout: Unlimited 00:08:28.189 Command Set Identifier: NVM (00h) 00:08:28.189 Deallocate: Supported 00:08:28.189 Deallocated/Unwritten Error: Supported 00:08:28.189 Deallocated Read Value: All 0x00 00:08:28.189 Deallocate in Write Zeroes: Not Supported 00:08:28.189 Deallocated Guard Field: 0xFFFF 00:08:28.189 Flush: Supported 00:08:28.189 Reservation: Not Supported 00:08:28.189 Namespace Sharing Capabilities: Private 00:08:28.189 Size (in LBAs): 1048576 (4GiB) 00:08:28.189 Capacity (in LBAs): 1048576 (4GiB) 00:08:28.189 Utilization (in LBAs): 1048576 (4GiB) 00:08:28.189 Thin Provisioning: Not Supported 00:08:28.189 Per-NS Atomic Units: No 00:08:28.189 Maximum Single Source Range Length: 128 00:08:28.189 Maximum Copy Length: 128 00:08:28.189 Maximum Source Range Count: 128 00:08:28.189 NGUID/EUI64 Never Reused: No 00:08:28.189 Namespace Write Protected: No 00:08:28.189 Number of LBA Formats: 8 00:08:28.189 Current LBA Format: LBA Format #04 00:08:28.189 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:28.189 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.189 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.189 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.189 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:28.189 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.189 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.189 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.189 00:08:28.189 NVM Specific Namespace Data 00:08:28.189 =========================== 00:08:28.189 Logical Block Storage Tag Mask: 0 00:08:28.189 Protection Information Capabilities: 00:08:28.189 16b Guard Protection Information Storage Tag Support: No 00:08:28.189 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:28.189 Storage Tag Check Read Support: No 00:08:28.189 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.189 13:49:21 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:28.189 13:49:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:28.758 ===================================================== 00:08:28.758 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:28.758 ===================================================== 00:08:28.758 Controller Capabilities/Features 00:08:28.758 ================================ 00:08:28.758 Vendor ID: 1b36 00:08:28.758 Subsystem Vendor ID: 1af4 00:08:28.758 Serial Number: 12340 00:08:28.758 Model Number: QEMU NVMe Ctrl 00:08:28.758 Firmware Version: 8.0.0 00:08:28.758 Recommended Arb Burst: 6 00:08:28.758 IEEE OUI Identifier: 00 54 52 00:08:28.758 Multi-path I/O 00:08:28.758 May have multiple subsystem ports: No 00:08:28.758 May have multiple controllers: No 00:08:28.758 Associated with SR-IOV VF: No 00:08:28.758 Max Data Transfer Size: 524288 00:08:28.758 Max Number of Namespaces: 256 00:08:28.758 Max Number of I/O Queues: 64 00:08:28.758 NVMe Specification Version (VS): 1.4 00:08:28.758 NVMe Specification Version (Identify): 1.4 00:08:28.758 Maximum Queue Entries: 2048 00:08:28.758 Contiguous Queues Required: Yes 00:08:28.758 Arbitration Mechanisms Supported 00:08:28.758 Weighted Round Robin: Not Supported 00:08:28.758 Vendor Specific: Not Supported 00:08:28.758 Reset Timeout: 7500 ms 00:08:28.758 Doorbell Stride: 4 bytes 00:08:28.758 NVM Subsystem Reset: Not Supported 00:08:28.758 Command Sets Supported 00:08:28.758 NVM Command Set: Supported 00:08:28.758 Boot Partition: Not Supported 00:08:28.758 Memory Page Size Minimum: 4096 bytes 00:08:28.758 Memory Page Size Maximum: 65536 bytes 00:08:28.758 Persistent Memory Region: Not Supported 00:08:28.758 Optional Asynchronous Events Supported 00:08:28.758 Namespace Attribute Notices: Supported 00:08:28.758 Firmware Activation Notices: Not Supported 00:08:28.758 ANA Change Notices: Not Supported 00:08:28.758 PLE Aggregate Log Change Notices: Not Supported 00:08:28.758 LBA Status Info Alert Notices: Not Supported 00:08:28.758 EGE Aggregate Log Change Notices: Not Supported 00:08:28.758 Normal NVM Subsystem Shutdown event: Not Supported 00:08:28.758 Zone Descriptor Change Notices: Not Supported 00:08:28.758 Discovery Log Change Notices: Not Supported 00:08:28.758 Controller Attributes 00:08:28.758 128-bit Host Identifier: Not Supported 00:08:28.758 Non-Operational Permissive Mode: Not Supported 00:08:28.758 NVM Sets: Not Supported 00:08:28.758 Read Recovery Levels: Not Supported 00:08:28.758 Endurance Groups: Not Supported 00:08:28.758 Predictable Latency Mode: Not Supported 00:08:28.758 Traffic Based Keep ALive: Not Supported 00:08:28.758 Namespace Granularity: Not Supported 00:08:28.758 SQ Associations: Not Supported 00:08:28.758 UUID List: Not Supported 00:08:28.758 Multi-Domain Subsystem: Not Supported 00:08:28.758 Fixed Capacity Management: Not Supported 00:08:28.758 Variable Capacity Management: Not Supported 00:08:28.758 Delete Endurance Group: Not Supported 00:08:28.758 Delete NVM Set: Not Supported 00:08:28.758 Extended LBA Formats Supported: Supported 00:08:28.758 Flexible Data Placement Supported: Not Supported 00:08:28.758 00:08:28.758 Controller Memory Buffer Support 00:08:28.758 ================================ 00:08:28.758 Supported: No 00:08:28.758 00:08:28.758 Persistent Memory Region Support 00:08:28.758 
================================ 00:08:28.758 Supported: No 00:08:28.758 00:08:28.758 Admin Command Set Attributes 00:08:28.758 ============================ 00:08:28.758 Security Send/Receive: Not Supported 00:08:28.758 Format NVM: Supported 00:08:28.758 Firmware Activate/Download: Not Supported 00:08:28.758 Namespace Management: Supported 00:08:28.758 Device Self-Test: Not Supported 00:08:28.758 Directives: Supported 00:08:28.758 NVMe-MI: Not Supported 00:08:28.758 Virtualization Management: Not Supported 00:08:28.758 Doorbell Buffer Config: Supported 00:08:28.758 Get LBA Status Capability: Not Supported 00:08:28.758 Command & Feature Lockdown Capability: Not Supported 00:08:28.758 Abort Command Limit: 4 00:08:28.758 Async Event Request Limit: 4 00:08:28.758 Number of Firmware Slots: N/A 00:08:28.758 Firmware Slot 1 Read-Only: N/A 00:08:28.758 Firmware Activation Without Reset: N/A 00:08:28.758 Multiple Update Detection Support: N/A 00:08:28.758 Firmware Update Granularity: No Information Provided 00:08:28.758 Per-Namespace SMART Log: Yes 00:08:28.758 Asymmetric Namespace Access Log Page: Not Supported 00:08:28.758 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:28.758 Command Effects Log Page: Supported 00:08:28.758 Get Log Page Extended Data: Supported 00:08:28.758 Telemetry Log Pages: Not Supported 00:08:28.758 Persistent Event Log Pages: Not Supported 00:08:28.758 Supported Log Pages Log Page: May Support 00:08:28.758 Commands Supported & Effects Log Page: Not Supported 00:08:28.758 Feature Identifiers & Effects Log Page:May Support 00:08:28.758 NVMe-MI Commands & Effects Log Page: May Support 00:08:28.758 Data Area 4 for Telemetry Log: Not Supported 00:08:28.758 Error Log Page Entries Supported: 1 00:08:28.758 Keep Alive: Not Supported 00:08:28.758 00:08:28.758 NVM Command Set Attributes 00:08:28.758 ========================== 00:08:28.758 Submission Queue Entry Size 00:08:28.758 Max: 64 00:08:28.758 Min: 64 00:08:28.758 Completion Queue Entry Size 00:08:28.758 Max: 16 00:08:28.758 Min: 16 00:08:28.758 Number of Namespaces: 256 00:08:28.758 Compare Command: Supported 00:08:28.758 Write Uncorrectable Command: Not Supported 00:08:28.758 Dataset Management Command: Supported 00:08:28.759 Write Zeroes Command: Supported 00:08:28.759 Set Features Save Field: Supported 00:08:28.759 Reservations: Not Supported 00:08:28.759 Timestamp: Supported 00:08:28.759 Copy: Supported 00:08:28.759 Volatile Write Cache: Present 00:08:28.759 Atomic Write Unit (Normal): 1 00:08:28.759 Atomic Write Unit (PFail): 1 00:08:28.759 Atomic Compare & Write Unit: 1 00:08:28.759 Fused Compare & Write: Not Supported 00:08:28.759 Scatter-Gather List 00:08:28.759 SGL Command Set: Supported 00:08:28.759 SGL Keyed: Not Supported 00:08:28.759 SGL Bit Bucket Descriptor: Not Supported 00:08:28.759 SGL Metadata Pointer: Not Supported 00:08:28.759 Oversized SGL: Not Supported 00:08:28.759 SGL Metadata Address: Not Supported 00:08:28.759 SGL Offset: Not Supported 00:08:28.759 Transport SGL Data Block: Not Supported 00:08:28.759 Replay Protected Memory Block: Not Supported 00:08:28.759 00:08:28.759 Firmware Slot Information 00:08:28.759 ========================= 00:08:28.759 Active slot: 1 00:08:28.759 Slot 1 Firmware Revision: 1.0 00:08:28.759 00:08:28.759 00:08:28.759 Commands Supported and Effects 00:08:28.759 ============================== 00:08:28.759 Admin Commands 00:08:28.759 -------------- 00:08:28.759 Delete I/O Submission Queue (00h): Supported 00:08:28.759 Create I/O Submission Queue (01h): Supported 00:08:28.759 
Get Log Page (02h): Supported 00:08:28.759 Delete I/O Completion Queue (04h): Supported 00:08:28.759 Create I/O Completion Queue (05h): Supported 00:08:28.759 Identify (06h): Supported 00:08:28.759 Abort (08h): Supported 00:08:28.759 Set Features (09h): Supported 00:08:28.759 Get Features (0Ah): Supported 00:08:28.759 Asynchronous Event Request (0Ch): Supported 00:08:28.759 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:28.759 Directive Send (19h): Supported 00:08:28.759 Directive Receive (1Ah): Supported 00:08:28.759 Virtualization Management (1Ch): Supported 00:08:28.759 Doorbell Buffer Config (7Ch): Supported 00:08:28.759 Format NVM (80h): Supported LBA-Change 00:08:28.759 I/O Commands 00:08:28.759 ------------ 00:08:28.759 Flush (00h): Supported LBA-Change 00:08:28.759 Write (01h): Supported LBA-Change 00:08:28.759 Read (02h): Supported 00:08:28.759 Compare (05h): Supported 00:08:28.759 Write Zeroes (08h): Supported LBA-Change 00:08:28.759 Dataset Management (09h): Supported LBA-Change 00:08:28.759 Unknown (0Ch): Supported 00:08:28.759 Unknown (12h): Supported 00:08:28.759 Copy (19h): Supported LBA-Change 00:08:28.759 Unknown (1Dh): Supported LBA-Change 00:08:28.759 00:08:28.759 Error Log 00:08:28.759 ========= 00:08:28.759 00:08:28.759 Arbitration 00:08:28.759 =========== 00:08:28.759 Arbitration Burst: no limit 00:08:28.759 00:08:28.759 Power Management 00:08:28.759 ================ 00:08:28.759 Number of Power States: 1 00:08:28.759 Current Power State: Power State #0 00:08:28.759 Power State #0: 00:08:28.759 Max Power: 25.00 W 00:08:28.759 Non-Operational State: Operational 00:08:28.759 Entry Latency: 16 microseconds 00:08:28.759 Exit Latency: 4 microseconds 00:08:28.759 Relative Read Throughput: 0 00:08:28.759 Relative Read Latency: 0 00:08:28.759 Relative Write Throughput: 0 00:08:28.759 Relative Write Latency: 0 00:08:28.759 Idle Power: Not Reported 00:08:28.759 Active Power: Not Reported 00:08:28.759 Non-Operational Permissive Mode: Not Supported 00:08:28.759 00:08:28.759 Health Information 00:08:28.759 ================== 00:08:28.759 Critical Warnings: 00:08:28.759 Available Spare Space: OK 00:08:28.759 Temperature: OK 00:08:28.759 Device Reliability: OK 00:08:28.759 Read Only: No 00:08:28.759 Volatile Memory Backup: OK 00:08:28.759 Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.759 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:28.759 Available Spare: 0% 00:08:28.759 Available Spare Threshold: 0% 00:08:28.759 Life Percentage Used: 0% 00:08:28.759 Data Units Read: 777 00:08:28.759 Data Units Written: 705 00:08:28.759 Host Read Commands: 37106 00:08:28.759 Host Write Commands: 36892 00:08:28.759 Controller Busy Time: 0 minutes 00:08:28.759 Power Cycles: 0 00:08:28.759 Power On Hours: 0 hours 00:08:28.759 Unsafe Shutdowns: 0 00:08:28.759 Unrecoverable Media Errors: 0 00:08:28.759 Lifetime Error Log Entries: 0 00:08:28.759 Warning Temperature Time: 0 minutes 00:08:28.759 Critical Temperature Time: 0 minutes 00:08:28.759 00:08:28.759 Number of Queues 00:08:28.759 ================ 00:08:28.759 Number of I/O Submission Queues: 64 00:08:28.759 Number of I/O Completion Queues: 64 00:08:28.759 00:08:28.759 ZNS Specific Controller Data 00:08:28.759 ============================ 00:08:28.759 Zone Append Size Limit: 0 00:08:28.759 00:08:28.759 00:08:28.759 Active Namespaces 00:08:28.759 ================= 00:08:28.759 Namespace ID:1 00:08:28.759 Error Recovery Timeout: Unlimited 00:08:28.759 Command Set Identifier: NVM (00h) 00:08:28.759 Deallocate: Supported 
00:08:28.759 Deallocated/Unwritten Error: Supported 00:08:28.759 Deallocated Read Value: All 0x00 00:08:28.759 Deallocate in Write Zeroes: Not Supported 00:08:28.759 Deallocated Guard Field: 0xFFFF 00:08:28.759 Flush: Supported 00:08:28.759 Reservation: Not Supported 00:08:28.759 Metadata Transferred as: Separate Metadata Buffer 00:08:28.759 Namespace Sharing Capabilities: Private 00:08:28.759 Size (in LBAs): 1548666 (5GiB) 00:08:28.759 Capacity (in LBAs): 1548666 (5GiB) 00:08:28.759 Utilization (in LBAs): 1548666 (5GiB) 00:08:28.759 Thin Provisioning: Not Supported 00:08:28.759 Per-NS Atomic Units: No 00:08:28.759 Maximum Single Source Range Length: 128 00:08:28.759 Maximum Copy Length: 128 00:08:28.759 Maximum Source Range Count: 128 00:08:28.759 NGUID/EUI64 Never Reused: No 00:08:28.759 Namespace Write Protected: No 00:08:28.759 Number of LBA Formats: 8 00:08:28.759 Current LBA Format: LBA Format #07 00:08:28.759 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:28.759 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:28.759 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:28.759 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:28.759 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:28.759 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:28.759 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:28.759 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:28.759 00:08:28.759 NVM Specific Namespace Data 00:08:28.759 =========================== 00:08:28.759 Logical Block Storage Tag Mask: 0 00:08:28.759 Protection Information Capabilities: 00:08:28.759 16b Guard Protection Information Storage Tag Support: No 00:08:28.759 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:28.759 Storage Tag Check Read Support: No 00:08:28.759 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.759 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.759 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.759 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.759 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.759 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.759 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.759 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:28.759 13:49:21 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:28.759 13:49:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:29.019 ===================================================== 00:08:29.019 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:29.019 ===================================================== 00:08:29.019 Controller Capabilities/Features 00:08:29.019 ================================ 00:08:29.019 Vendor ID: 1b36 00:08:29.019 Subsystem Vendor ID: 1af4 00:08:29.019 Serial Number: 12341 00:08:29.019 Model Number: QEMU NVMe Ctrl 00:08:29.019 Firmware Version: 8.0.0 00:08:29.019 Recommended Arb Burst: 6 00:08:29.019 IEEE OUI Identifier: 00 54 52 00:08:29.019 Multi-path I/O 00:08:29.019 May have multiple subsystem ports: No 00:08:29.019 May have multiple 
controllers: No 00:08:29.019 Associated with SR-IOV VF: No 00:08:29.019 Max Data Transfer Size: 524288 00:08:29.019 Max Number of Namespaces: 256 00:08:29.019 Max Number of I/O Queues: 64 00:08:29.019 NVMe Specification Version (VS): 1.4 00:08:29.019 NVMe Specification Version (Identify): 1.4 00:08:29.019 Maximum Queue Entries: 2048 00:08:29.019 Contiguous Queues Required: Yes 00:08:29.019 Arbitration Mechanisms Supported 00:08:29.019 Weighted Round Robin: Not Supported 00:08:29.019 Vendor Specific: Not Supported 00:08:29.019 Reset Timeout: 7500 ms 00:08:29.019 Doorbell Stride: 4 bytes 00:08:29.019 NVM Subsystem Reset: Not Supported 00:08:29.019 Command Sets Supported 00:08:29.019 NVM Command Set: Supported 00:08:29.019 Boot Partition: Not Supported 00:08:29.019 Memory Page Size Minimum: 4096 bytes 00:08:29.019 Memory Page Size Maximum: 65536 bytes 00:08:29.019 Persistent Memory Region: Not Supported 00:08:29.019 Optional Asynchronous Events Supported 00:08:29.019 Namespace Attribute Notices: Supported 00:08:29.019 Firmware Activation Notices: Not Supported 00:08:29.019 ANA Change Notices: Not Supported 00:08:29.019 PLE Aggregate Log Change Notices: Not Supported 00:08:29.019 LBA Status Info Alert Notices: Not Supported 00:08:29.019 EGE Aggregate Log Change Notices: Not Supported 00:08:29.019 Normal NVM Subsystem Shutdown event: Not Supported 00:08:29.019 Zone Descriptor Change Notices: Not Supported 00:08:29.019 Discovery Log Change Notices: Not Supported 00:08:29.019 Controller Attributes 00:08:29.019 128-bit Host Identifier: Not Supported 00:08:29.019 Non-Operational Permissive Mode: Not Supported 00:08:29.019 NVM Sets: Not Supported 00:08:29.019 Read Recovery Levels: Not Supported 00:08:29.019 Endurance Groups: Not Supported 00:08:29.019 Predictable Latency Mode: Not Supported 00:08:29.019 Traffic Based Keep ALive: Not Supported 00:08:29.019 Namespace Granularity: Not Supported 00:08:29.019 SQ Associations: Not Supported 00:08:29.019 UUID List: Not Supported 00:08:29.019 Multi-Domain Subsystem: Not Supported 00:08:29.019 Fixed Capacity Management: Not Supported 00:08:29.019 Variable Capacity Management: Not Supported 00:08:29.019 Delete Endurance Group: Not Supported 00:08:29.019 Delete NVM Set: Not Supported 00:08:29.019 Extended LBA Formats Supported: Supported 00:08:29.019 Flexible Data Placement Supported: Not Supported 00:08:29.019 00:08:29.019 Controller Memory Buffer Support 00:08:29.019 ================================ 00:08:29.019 Supported: No 00:08:29.019 00:08:29.019 Persistent Memory Region Support 00:08:29.019 ================================ 00:08:29.019 Supported: No 00:08:29.019 00:08:29.019 Admin Command Set Attributes 00:08:29.019 ============================ 00:08:29.019 Security Send/Receive: Not Supported 00:08:29.019 Format NVM: Supported 00:08:29.019 Firmware Activate/Download: Not Supported 00:08:29.019 Namespace Management: Supported 00:08:29.019 Device Self-Test: Not Supported 00:08:29.019 Directives: Supported 00:08:29.019 NVMe-MI: Not Supported 00:08:29.019 Virtualization Management: Not Supported 00:08:29.019 Doorbell Buffer Config: Supported 00:08:29.019 Get LBA Status Capability: Not Supported 00:08:29.019 Command & Feature Lockdown Capability: Not Supported 00:08:29.019 Abort Command Limit: 4 00:08:29.019 Async Event Request Limit: 4 00:08:29.019 Number of Firmware Slots: N/A 00:08:29.019 Firmware Slot 1 Read-Only: N/A 00:08:29.019 Firmware Activation Without Reset: N/A 00:08:29.019 Multiple Update Detection Support: N/A 00:08:29.019 Firmware Update 
Granularity: No Information Provided 00:08:29.019 Per-Namespace SMART Log: Yes 00:08:29.019 Asymmetric Namespace Access Log Page: Not Supported 00:08:29.019 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:29.019 Command Effects Log Page: Supported 00:08:29.019 Get Log Page Extended Data: Supported 00:08:29.019 Telemetry Log Pages: Not Supported 00:08:29.019 Persistent Event Log Pages: Not Supported 00:08:29.019 Supported Log Pages Log Page: May Support 00:08:29.019 Commands Supported & Effects Log Page: Not Supported 00:08:29.019 Feature Identifiers & Effects Log Page:May Support 00:08:29.019 NVMe-MI Commands & Effects Log Page: May Support 00:08:29.019 Data Area 4 for Telemetry Log: Not Supported 00:08:29.019 Error Log Page Entries Supported: 1 00:08:29.019 Keep Alive: Not Supported 00:08:29.019 00:08:29.019 NVM Command Set Attributes 00:08:29.019 ========================== 00:08:29.019 Submission Queue Entry Size 00:08:29.019 Max: 64 00:08:29.019 Min: 64 00:08:29.019 Completion Queue Entry Size 00:08:29.019 Max: 16 00:08:29.019 Min: 16 00:08:29.019 Number of Namespaces: 256 00:08:29.019 Compare Command: Supported 00:08:29.019 Write Uncorrectable Command: Not Supported 00:08:29.019 Dataset Management Command: Supported 00:08:29.019 Write Zeroes Command: Supported 00:08:29.019 Set Features Save Field: Supported 00:08:29.019 Reservations: Not Supported 00:08:29.019 Timestamp: Supported 00:08:29.019 Copy: Supported 00:08:29.020 Volatile Write Cache: Present 00:08:29.020 Atomic Write Unit (Normal): 1 00:08:29.020 Atomic Write Unit (PFail): 1 00:08:29.020 Atomic Compare & Write Unit: 1 00:08:29.020 Fused Compare & Write: Not Supported 00:08:29.020 Scatter-Gather List 00:08:29.020 SGL Command Set: Supported 00:08:29.020 SGL Keyed: Not Supported 00:08:29.020 SGL Bit Bucket Descriptor: Not Supported 00:08:29.020 SGL Metadata Pointer: Not Supported 00:08:29.020 Oversized SGL: Not Supported 00:08:29.020 SGL Metadata Address: Not Supported 00:08:29.020 SGL Offset: Not Supported 00:08:29.020 Transport SGL Data Block: Not Supported 00:08:29.020 Replay Protected Memory Block: Not Supported 00:08:29.020 00:08:29.020 Firmware Slot Information 00:08:29.020 ========================= 00:08:29.020 Active slot: 1 00:08:29.020 Slot 1 Firmware Revision: 1.0 00:08:29.020 00:08:29.020 00:08:29.020 Commands Supported and Effects 00:08:29.020 ============================== 00:08:29.020 Admin Commands 00:08:29.020 -------------- 00:08:29.020 Delete I/O Submission Queue (00h): Supported 00:08:29.020 Create I/O Submission Queue (01h): Supported 00:08:29.020 Get Log Page (02h): Supported 00:08:29.020 Delete I/O Completion Queue (04h): Supported 00:08:29.020 Create I/O Completion Queue (05h): Supported 00:08:29.020 Identify (06h): Supported 00:08:29.020 Abort (08h): Supported 00:08:29.020 Set Features (09h): Supported 00:08:29.020 Get Features (0Ah): Supported 00:08:29.020 Asynchronous Event Request (0Ch): Supported 00:08:29.020 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:29.020 Directive Send (19h): Supported 00:08:29.020 Directive Receive (1Ah): Supported 00:08:29.020 Virtualization Management (1Ch): Supported 00:08:29.020 Doorbell Buffer Config (7Ch): Supported 00:08:29.020 Format NVM (80h): Supported LBA-Change 00:08:29.020 I/O Commands 00:08:29.020 ------------ 00:08:29.020 Flush (00h): Supported LBA-Change 00:08:29.020 Write (01h): Supported LBA-Change 00:08:29.020 Read (02h): Supported 00:08:29.020 Compare (05h): Supported 00:08:29.020 Write Zeroes (08h): Supported LBA-Change 00:08:29.020 
Dataset Management (09h): Supported LBA-Change 00:08:29.020 Unknown (0Ch): Supported 00:08:29.020 Unknown (12h): Supported 00:08:29.020 Copy (19h): Supported LBA-Change 00:08:29.020 Unknown (1Dh): Supported LBA-Change 00:08:29.020 00:08:29.020 Error Log 00:08:29.020 ========= 00:08:29.020 00:08:29.020 Arbitration 00:08:29.020 =========== 00:08:29.020 Arbitration Burst: no limit 00:08:29.020 00:08:29.020 Power Management 00:08:29.020 ================ 00:08:29.020 Number of Power States: 1 00:08:29.020 Current Power State: Power State #0 00:08:29.020 Power State #0: 00:08:29.020 Max Power: 25.00 W 00:08:29.020 Non-Operational State: Operational 00:08:29.020 Entry Latency: 16 microseconds 00:08:29.020 Exit Latency: 4 microseconds 00:08:29.020 Relative Read Throughput: 0 00:08:29.020 Relative Read Latency: 0 00:08:29.020 Relative Write Throughput: 0 00:08:29.020 Relative Write Latency: 0 00:08:29.020 Idle Power: Not Reported 00:08:29.020 Active Power: Not Reported 00:08:29.020 Non-Operational Permissive Mode: Not Supported 00:08:29.020 00:08:29.020 Health Information 00:08:29.020 ================== 00:08:29.020 Critical Warnings: 00:08:29.020 Available Spare Space: OK 00:08:29.020 Temperature: OK 00:08:29.020 Device Reliability: OK 00:08:29.020 Read Only: No 00:08:29.020 Volatile Memory Backup: OK 00:08:29.020 Current Temperature: 323 Kelvin (50 Celsius) 00:08:29.020 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:29.020 Available Spare: 0% 00:08:29.020 Available Spare Threshold: 0% 00:08:29.020 Life Percentage Used: 0% 00:08:29.020 Data Units Read: 1192 00:08:29.020 Data Units Written: 1066 00:08:29.020 Host Read Commands: 55529 00:08:29.020 Host Write Commands: 54416 00:08:29.020 Controller Busy Time: 0 minutes 00:08:29.020 Power Cycles: 0 00:08:29.020 Power On Hours: 0 hours 00:08:29.020 Unsafe Shutdowns: 0 00:08:29.020 Unrecoverable Media Errors: 0 00:08:29.020 Lifetime Error Log Entries: 0 00:08:29.020 Warning Temperature Time: 0 minutes 00:08:29.020 Critical Temperature Time: 0 minutes 00:08:29.020 00:08:29.020 Number of Queues 00:08:29.020 ================ 00:08:29.020 Number of I/O Submission Queues: 64 00:08:29.020 Number of I/O Completion Queues: 64 00:08:29.020 00:08:29.020 ZNS Specific Controller Data 00:08:29.020 ============================ 00:08:29.020 Zone Append Size Limit: 0 00:08:29.020 00:08:29.020 00:08:29.020 Active Namespaces 00:08:29.020 ================= 00:08:29.020 Namespace ID:1 00:08:29.020 Error Recovery Timeout: Unlimited 00:08:29.020 Command Set Identifier: NVM (00h) 00:08:29.020 Deallocate: Supported 00:08:29.020 Deallocated/Unwritten Error: Supported 00:08:29.020 Deallocated Read Value: All 0x00 00:08:29.020 Deallocate in Write Zeroes: Not Supported 00:08:29.020 Deallocated Guard Field: 0xFFFF 00:08:29.020 Flush: Supported 00:08:29.020 Reservation: Not Supported 00:08:29.020 Namespace Sharing Capabilities: Private 00:08:29.020 Size (in LBAs): 1310720 (5GiB) 00:08:29.020 Capacity (in LBAs): 1310720 (5GiB) 00:08:29.020 Utilization (in LBAs): 1310720 (5GiB) 00:08:29.020 Thin Provisioning: Not Supported 00:08:29.020 Per-NS Atomic Units: No 00:08:29.020 Maximum Single Source Range Length: 128 00:08:29.020 Maximum Copy Length: 128 00:08:29.020 Maximum Source Range Count: 128 00:08:29.020 NGUID/EUI64 Never Reused: No 00:08:29.020 Namespace Write Protected: No 00:08:29.020 Number of LBA Formats: 8 00:08:29.020 Current LBA Format: LBA Format #04 00:08:29.020 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:29.020 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:08:29.020 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:29.020 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:29.020 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:29.020 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:29.020 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:29.020 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:29.020 00:08:29.020 NVM Specific Namespace Data 00:08:29.020 =========================== 00:08:29.020 Logical Block Storage Tag Mask: 0 00:08:29.020 Protection Information Capabilities: 00:08:29.020 16b Guard Protection Information Storage Tag Support: No 00:08:29.020 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:29.020 Storage Tag Check Read Support: No 00:08:29.020 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.020 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.020 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.020 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.020 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.020 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.020 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.020 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.020 13:49:21 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:29.020 13:49:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:29.280 ===================================================== 00:08:29.280 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:29.280 ===================================================== 00:08:29.280 Controller Capabilities/Features 00:08:29.280 ================================ 00:08:29.280 Vendor ID: 1b36 00:08:29.280 Subsystem Vendor ID: 1af4 00:08:29.280 Serial Number: 12342 00:08:29.280 Model Number: QEMU NVMe Ctrl 00:08:29.280 Firmware Version: 8.0.0 00:08:29.280 Recommended Arb Burst: 6 00:08:29.280 IEEE OUI Identifier: 00 54 52 00:08:29.280 Multi-path I/O 00:08:29.280 May have multiple subsystem ports: No 00:08:29.280 May have multiple controllers: No 00:08:29.280 Associated with SR-IOV VF: No 00:08:29.280 Max Data Transfer Size: 524288 00:08:29.280 Max Number of Namespaces: 256 00:08:29.280 Max Number of I/O Queues: 64 00:08:29.280 NVMe Specification Version (VS): 1.4 00:08:29.280 NVMe Specification Version (Identify): 1.4 00:08:29.280 Maximum Queue Entries: 2048 00:08:29.280 Contiguous Queues Required: Yes 00:08:29.280 Arbitration Mechanisms Supported 00:08:29.280 Weighted Round Robin: Not Supported 00:08:29.280 Vendor Specific: Not Supported 00:08:29.280 Reset Timeout: 7500 ms 00:08:29.280 Doorbell Stride: 4 bytes 00:08:29.280 NVM Subsystem Reset: Not Supported 00:08:29.280 Command Sets Supported 00:08:29.280 NVM Command Set: Supported 00:08:29.280 Boot Partition: Not Supported 00:08:29.280 Memory Page Size Minimum: 4096 bytes 00:08:29.280 Memory Page Size Maximum: 65536 bytes 00:08:29.280 Persistent Memory Region: Not Supported 00:08:29.280 Optional Asynchronous Events Supported 00:08:29.280 Namespace Attribute Notices: Supported 00:08:29.280 
Firmware Activation Notices: Not Supported 00:08:29.280 ANA Change Notices: Not Supported 00:08:29.280 PLE Aggregate Log Change Notices: Not Supported 00:08:29.280 LBA Status Info Alert Notices: Not Supported 00:08:29.280 EGE Aggregate Log Change Notices: Not Supported 00:08:29.280 Normal NVM Subsystem Shutdown event: Not Supported 00:08:29.280 Zone Descriptor Change Notices: Not Supported 00:08:29.280 Discovery Log Change Notices: Not Supported 00:08:29.280 Controller Attributes 00:08:29.280 128-bit Host Identifier: Not Supported 00:08:29.280 Non-Operational Permissive Mode: Not Supported 00:08:29.280 NVM Sets: Not Supported 00:08:29.280 Read Recovery Levels: Not Supported 00:08:29.280 Endurance Groups: Not Supported 00:08:29.280 Predictable Latency Mode: Not Supported 00:08:29.280 Traffic Based Keep ALive: Not Supported 00:08:29.280 Namespace Granularity: Not Supported 00:08:29.280 SQ Associations: Not Supported 00:08:29.280 UUID List: Not Supported 00:08:29.281 Multi-Domain Subsystem: Not Supported 00:08:29.281 Fixed Capacity Management: Not Supported 00:08:29.281 Variable Capacity Management: Not Supported 00:08:29.281 Delete Endurance Group: Not Supported 00:08:29.281 Delete NVM Set: Not Supported 00:08:29.281 Extended LBA Formats Supported: Supported 00:08:29.281 Flexible Data Placement Supported: Not Supported 00:08:29.281 00:08:29.281 Controller Memory Buffer Support 00:08:29.281 ================================ 00:08:29.281 Supported: No 00:08:29.281 00:08:29.281 Persistent Memory Region Support 00:08:29.281 ================================ 00:08:29.281 Supported: No 00:08:29.281 00:08:29.281 Admin Command Set Attributes 00:08:29.281 ============================ 00:08:29.281 Security Send/Receive: Not Supported 00:08:29.281 Format NVM: Supported 00:08:29.281 Firmware Activate/Download: Not Supported 00:08:29.281 Namespace Management: Supported 00:08:29.281 Device Self-Test: Not Supported 00:08:29.281 Directives: Supported 00:08:29.281 NVMe-MI: Not Supported 00:08:29.281 Virtualization Management: Not Supported 00:08:29.281 Doorbell Buffer Config: Supported 00:08:29.281 Get LBA Status Capability: Not Supported 00:08:29.281 Command & Feature Lockdown Capability: Not Supported 00:08:29.281 Abort Command Limit: 4 00:08:29.281 Async Event Request Limit: 4 00:08:29.281 Number of Firmware Slots: N/A 00:08:29.281 Firmware Slot 1 Read-Only: N/A 00:08:29.281 Firmware Activation Without Reset: N/A 00:08:29.281 Multiple Update Detection Support: N/A 00:08:29.281 Firmware Update Granularity: No Information Provided 00:08:29.281 Per-Namespace SMART Log: Yes 00:08:29.281 Asymmetric Namespace Access Log Page: Not Supported 00:08:29.281 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:29.281 Command Effects Log Page: Supported 00:08:29.281 Get Log Page Extended Data: Supported 00:08:29.281 Telemetry Log Pages: Not Supported 00:08:29.281 Persistent Event Log Pages: Not Supported 00:08:29.281 Supported Log Pages Log Page: May Support 00:08:29.281 Commands Supported & Effects Log Page: Not Supported 00:08:29.281 Feature Identifiers & Effects Log Page:May Support 00:08:29.281 NVMe-MI Commands & Effects Log Page: May Support 00:08:29.281 Data Area 4 for Telemetry Log: Not Supported 00:08:29.281 Error Log Page Entries Supported: 1 00:08:29.281 Keep Alive: Not Supported 00:08:29.281 00:08:29.281 NVM Command Set Attributes 00:08:29.281 ========================== 00:08:29.281 Submission Queue Entry Size 00:08:29.281 Max: 64 00:08:29.281 Min: 64 00:08:29.281 Completion Queue Entry Size 00:08:29.281 Max: 16 
00:08:29.281 Min: 16 00:08:29.281 Number of Namespaces: 256 00:08:29.281 Compare Command: Supported 00:08:29.281 Write Uncorrectable Command: Not Supported 00:08:29.281 Dataset Management Command: Supported 00:08:29.281 Write Zeroes Command: Supported 00:08:29.281 Set Features Save Field: Supported 00:08:29.281 Reservations: Not Supported 00:08:29.281 Timestamp: Supported 00:08:29.281 Copy: Supported 00:08:29.281 Volatile Write Cache: Present 00:08:29.281 Atomic Write Unit (Normal): 1 00:08:29.281 Atomic Write Unit (PFail): 1 00:08:29.281 Atomic Compare & Write Unit: 1 00:08:29.281 Fused Compare & Write: Not Supported 00:08:29.281 Scatter-Gather List 00:08:29.281 SGL Command Set: Supported 00:08:29.281 SGL Keyed: Not Supported 00:08:29.281 SGL Bit Bucket Descriptor: Not Supported 00:08:29.281 SGL Metadata Pointer: Not Supported 00:08:29.281 Oversized SGL: Not Supported 00:08:29.281 SGL Metadata Address: Not Supported 00:08:29.281 SGL Offset: Not Supported 00:08:29.281 Transport SGL Data Block: Not Supported 00:08:29.281 Replay Protected Memory Block: Not Supported 00:08:29.281 00:08:29.281 Firmware Slot Information 00:08:29.281 ========================= 00:08:29.281 Active slot: 1 00:08:29.281 Slot 1 Firmware Revision: 1.0 00:08:29.281 00:08:29.281 00:08:29.281 Commands Supported and Effects 00:08:29.281 ============================== 00:08:29.281 Admin Commands 00:08:29.281 -------------- 00:08:29.281 Delete I/O Submission Queue (00h): Supported 00:08:29.281 Create I/O Submission Queue (01h): Supported 00:08:29.281 Get Log Page (02h): Supported 00:08:29.281 Delete I/O Completion Queue (04h): Supported 00:08:29.281 Create I/O Completion Queue (05h): Supported 00:08:29.281 Identify (06h): Supported 00:08:29.281 Abort (08h): Supported 00:08:29.281 Set Features (09h): Supported 00:08:29.281 Get Features (0Ah): Supported 00:08:29.281 Asynchronous Event Request (0Ch): Supported 00:08:29.281 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:29.281 Directive Send (19h): Supported 00:08:29.281 Directive Receive (1Ah): Supported 00:08:29.281 Virtualization Management (1Ch): Supported 00:08:29.281 Doorbell Buffer Config (7Ch): Supported 00:08:29.281 Format NVM (80h): Supported LBA-Change 00:08:29.281 I/O Commands 00:08:29.281 ------------ 00:08:29.281 Flush (00h): Supported LBA-Change 00:08:29.281 Write (01h): Supported LBA-Change 00:08:29.281 Read (02h): Supported 00:08:29.281 Compare (05h): Supported 00:08:29.281 Write Zeroes (08h): Supported LBA-Change 00:08:29.281 Dataset Management (09h): Supported LBA-Change 00:08:29.281 Unknown (0Ch): Supported 00:08:29.281 Unknown (12h): Supported 00:08:29.281 Copy (19h): Supported LBA-Change 00:08:29.281 Unknown (1Dh): Supported LBA-Change 00:08:29.281 00:08:29.281 Error Log 00:08:29.281 ========= 00:08:29.281 00:08:29.281 Arbitration 00:08:29.281 =========== 00:08:29.281 Arbitration Burst: no limit 00:08:29.281 00:08:29.281 Power Management 00:08:29.281 ================ 00:08:29.281 Number of Power States: 1 00:08:29.281 Current Power State: Power State #0 00:08:29.281 Power State #0: 00:08:29.281 Max Power: 25.00 W 00:08:29.281 Non-Operational State: Operational 00:08:29.281 Entry Latency: 16 microseconds 00:08:29.281 Exit Latency: 4 microseconds 00:08:29.281 Relative Read Throughput: 0 00:08:29.281 Relative Read Latency: 0 00:08:29.281 Relative Write Throughput: 0 00:08:29.281 Relative Write Latency: 0 00:08:29.281 Idle Power: Not Reported 00:08:29.281 Active Power: Not Reported 00:08:29.281 Non-Operational Permissive Mode: Not Supported 
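The Health Information block that follows reports temperature in Kelvin with a Celsius equivalent in parentheses, and each Active Namespace lists its size both in LBAs and in GiB. A minimal bash sketch of the two conversions, with values copied from these dumps; the conventional 273 K offset and the active 4096-byte LBA Format #04 are assumptions made for illustration:

kelvin=323
echo "$((kelvin - 273)) Celsius"                 # 50 Celsius, matching "323 Kelvin (50 Celsius)"
lbas=1048576 lba_data_size=4096                  # "Size (in LBAs): 1048576 (4GiB)" at LBA Format #04
echo "$((lbas * lba_data_size / 1024**3)) GiB"   # 4 GiB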
00:08:29.281 00:08:29.281 Health Information 00:08:29.281 ================== 00:08:29.281 Critical Warnings: 00:08:29.281 Available Spare Space: OK 00:08:29.281 Temperature: OK 00:08:29.281 Device Reliability: OK 00:08:29.281 Read Only: No 00:08:29.281 Volatile Memory Backup: OK 00:08:29.281 Current Temperature: 323 Kelvin (50 Celsius) 00:08:29.281 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:29.281 Available Spare: 0% 00:08:29.281 Available Spare Threshold: 0% 00:08:29.281 Life Percentage Used: 0% 00:08:29.281 Data Units Read: 2467 00:08:29.281 Data Units Written: 2254 00:08:29.281 Host Read Commands: 113473 00:08:29.281 Host Write Commands: 111742 00:08:29.281 Controller Busy Time: 0 minutes 00:08:29.281 Power Cycles: 0 00:08:29.281 Power On Hours: 0 hours 00:08:29.281 Unsafe Shutdowns: 0 00:08:29.281 Unrecoverable Media Errors: 0 00:08:29.281 Lifetime Error Log Entries: 0 00:08:29.281 Warning Temperature Time: 0 minutes 00:08:29.281 Critical Temperature Time: 0 minutes 00:08:29.281 00:08:29.281 Number of Queues 00:08:29.281 ================ 00:08:29.281 Number of I/O Submission Queues: 64 00:08:29.281 Number of I/O Completion Queues: 64 00:08:29.281 00:08:29.281 ZNS Specific Controller Data 00:08:29.281 ============================ 00:08:29.281 Zone Append Size Limit: 0 00:08:29.281 00:08:29.281 00:08:29.281 Active Namespaces 00:08:29.281 ================= 00:08:29.281 Namespace ID:1 00:08:29.281 Error Recovery Timeout: Unlimited 00:08:29.281 Command Set Identifier: NVM (00h) 00:08:29.281 Deallocate: Supported 00:08:29.281 Deallocated/Unwritten Error: Supported 00:08:29.281 Deallocated Read Value: All 0x00 00:08:29.281 Deallocate in Write Zeroes: Not Supported 00:08:29.281 Deallocated Guard Field: 0xFFFF 00:08:29.281 Flush: Supported 00:08:29.281 Reservation: Not Supported 00:08:29.281 Namespace Sharing Capabilities: Private 00:08:29.281 Size (in LBAs): 1048576 (4GiB) 00:08:29.281 Capacity (in LBAs): 1048576 (4GiB) 00:08:29.281 Utilization (in LBAs): 1048576 (4GiB) 00:08:29.281 Thin Provisioning: Not Supported 00:08:29.281 Per-NS Atomic Units: No 00:08:29.281 Maximum Single Source Range Length: 128 00:08:29.281 Maximum Copy Length: 128 00:08:29.281 Maximum Source Range Count: 128 00:08:29.281 NGUID/EUI64 Never Reused: No 00:08:29.281 Namespace Write Protected: No 00:08:29.281 Number of LBA Formats: 8 00:08:29.281 Current LBA Format: LBA Format #04 00:08:29.281 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:29.281 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:29.281 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:29.281 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:29.281 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:29.281 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:29.281 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:29.281 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:29.281 00:08:29.281 NVM Specific Namespace Data 00:08:29.281 =========================== 00:08:29.282 Logical Block Storage Tag Mask: 0 00:08:29.282 Protection Information Capabilities: 00:08:29.282 16b Guard Protection Information Storage Tag Support: No 00:08:29.282 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:29.282 Storage Tag Check Read Support: No 00:08:29.282 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Namespace ID:2 00:08:29.282 Error Recovery Timeout: Unlimited 00:08:29.282 Command Set Identifier: NVM (00h) 00:08:29.282 Deallocate: Supported 00:08:29.282 Deallocated/Unwritten Error: Supported 00:08:29.282 Deallocated Read Value: All 0x00 00:08:29.282 Deallocate in Write Zeroes: Not Supported 00:08:29.282 Deallocated Guard Field: 0xFFFF 00:08:29.282 Flush: Supported 00:08:29.282 Reservation: Not Supported 00:08:29.282 Namespace Sharing Capabilities: Private 00:08:29.282 Size (in LBAs): 1048576 (4GiB) 00:08:29.282 Capacity (in LBAs): 1048576 (4GiB) 00:08:29.282 Utilization (in LBAs): 1048576 (4GiB) 00:08:29.282 Thin Provisioning: Not Supported 00:08:29.282 Per-NS Atomic Units: No 00:08:29.282 Maximum Single Source Range Length: 128 00:08:29.282 Maximum Copy Length: 128 00:08:29.282 Maximum Source Range Count: 128 00:08:29.282 NGUID/EUI64 Never Reused: No 00:08:29.282 Namespace Write Protected: No 00:08:29.282 Number of LBA Formats: 8 00:08:29.282 Current LBA Format: LBA Format #04 00:08:29.282 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:29.282 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:29.282 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:29.282 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:29.282 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:29.282 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:29.282 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:29.282 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:29.282 00:08:29.282 NVM Specific Namespace Data 00:08:29.282 =========================== 00:08:29.282 Logical Block Storage Tag Mask: 0 00:08:29.282 Protection Information Capabilities: 00:08:29.282 16b Guard Protection Information Storage Tag Support: No 00:08:29.282 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:29.282 Storage Tag Check Read Support: No 00:08:29.282 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Namespace ID:3 00:08:29.282 Error Recovery Timeout: Unlimited 00:08:29.282 Command Set Identifier: NVM (00h) 00:08:29.282 Deallocate: Supported 00:08:29.282 Deallocated/Unwritten Error: Supported 00:08:29.282 Deallocated Read 
Value: All 0x00 00:08:29.282 Deallocate in Write Zeroes: Not Supported 00:08:29.282 Deallocated Guard Field: 0xFFFF 00:08:29.282 Flush: Supported 00:08:29.282 Reservation: Not Supported 00:08:29.282 Namespace Sharing Capabilities: Private 00:08:29.282 Size (in LBAs): 1048576 (4GiB) 00:08:29.282 Capacity (in LBAs): 1048576 (4GiB) 00:08:29.282 Utilization (in LBAs): 1048576 (4GiB) 00:08:29.282 Thin Provisioning: Not Supported 00:08:29.282 Per-NS Atomic Units: No 00:08:29.282 Maximum Single Source Range Length: 128 00:08:29.282 Maximum Copy Length: 128 00:08:29.282 Maximum Source Range Count: 128 00:08:29.282 NGUID/EUI64 Never Reused: No 00:08:29.282 Namespace Write Protected: No 00:08:29.282 Number of LBA Formats: 8 00:08:29.282 Current LBA Format: LBA Format #04 00:08:29.282 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:29.282 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:29.282 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:29.282 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:29.282 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:29.282 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:29.282 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:29.282 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:29.282 00:08:29.282 NVM Specific Namespace Data 00:08:29.282 =========================== 00:08:29.282 Logical Block Storage Tag Mask: 0 00:08:29.282 Protection Information Capabilities: 00:08:29.282 16b Guard Protection Information Storage Tag Support: No 00:08:29.282 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:29.282 Storage Tag Check Read Support: No 00:08:29.282 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.282 13:49:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:29.282 13:49:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:29.542 ===================================================== 00:08:29.542 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:29.542 ===================================================== 00:08:29.542 Controller Capabilities/Features 00:08:29.542 ================================ 00:08:29.542 Vendor ID: 1b36 00:08:29.542 Subsystem Vendor ID: 1af4 00:08:29.542 Serial Number: 12343 00:08:29.542 Model Number: QEMU NVMe Ctrl 00:08:29.542 Firmware Version: 8.0.0 00:08:29.542 Recommended Arb Burst: 6 00:08:29.542 IEEE OUI Identifier: 00 54 52 00:08:29.542 Multi-path I/O 00:08:29.542 May have multiple subsystem ports: No 00:08:29.542 May have multiple controllers: Yes 00:08:29.542 Associated with SR-IOV VF: No 00:08:29.542 Max Data Transfer Size: 524288 00:08:29.542 Max Number of Namespaces: 
256 00:08:29.542 Max Number of I/O Queues: 64 00:08:29.542 NVMe Specification Version (VS): 1.4 00:08:29.542 NVMe Specification Version (Identify): 1.4 00:08:29.542 Maximum Queue Entries: 2048 00:08:29.542 Contiguous Queues Required: Yes 00:08:29.542 Arbitration Mechanisms Supported 00:08:29.542 Weighted Round Robin: Not Supported 00:08:29.542 Vendor Specific: Not Supported 00:08:29.542 Reset Timeout: 7500 ms 00:08:29.542 Doorbell Stride: 4 bytes 00:08:29.542 NVM Subsystem Reset: Not Supported 00:08:29.542 Command Sets Supported 00:08:29.542 NVM Command Set: Supported 00:08:29.542 Boot Partition: Not Supported 00:08:29.542 Memory Page Size Minimum: 4096 bytes 00:08:29.542 Memory Page Size Maximum: 65536 bytes 00:08:29.542 Persistent Memory Region: Not Supported 00:08:29.542 Optional Asynchronous Events Supported 00:08:29.542 Namespace Attribute Notices: Supported 00:08:29.542 Firmware Activation Notices: Not Supported 00:08:29.542 ANA Change Notices: Not Supported 00:08:29.542 PLE Aggregate Log Change Notices: Not Supported 00:08:29.542 LBA Status Info Alert Notices: Not Supported 00:08:29.542 EGE Aggregate Log Change Notices: Not Supported 00:08:29.542 Normal NVM Subsystem Shutdown event: Not Supported 00:08:29.542 Zone Descriptor Change Notices: Not Supported 00:08:29.542 Discovery Log Change Notices: Not Supported 00:08:29.542 Controller Attributes 00:08:29.542 128-bit Host Identifier: Not Supported 00:08:29.542 Non-Operational Permissive Mode: Not Supported 00:08:29.542 NVM Sets: Not Supported 00:08:29.542 Read Recovery Levels: Not Supported 00:08:29.542 Endurance Groups: Supported 00:08:29.542 Predictable Latency Mode: Not Supported 00:08:29.542 Traffic Based Keep Alive: Not Supported 00:08:29.542 Namespace Granularity: Not Supported 00:08:29.542 SQ Associations: Not Supported 00:08:29.542 UUID List: Not Supported 00:08:29.542 Multi-Domain Subsystem: Not Supported 00:08:29.542 Fixed Capacity Management: Not Supported 00:08:29.542 Variable Capacity Management: Not Supported 00:08:29.542 Delete Endurance Group: Not Supported 00:08:29.542 Delete NVM Set: Not Supported 00:08:29.542 Extended LBA Formats Supported: Supported 00:08:29.542 Flexible Data Placement Supported: Supported 00:08:29.542 00:08:29.542 Controller Memory Buffer Support 00:08:29.542 ================================ 00:08:29.542 Supported: No 00:08:29.542 00:08:29.542 Persistent Memory Region Support 00:08:29.542 ================================ 00:08:29.542 Supported: No 00:08:29.542 00:08:29.542 Admin Command Set Attributes 00:08:29.542 ============================ 00:08:29.542 Security Send/Receive: Not Supported 00:08:29.542 Format NVM: Supported 00:08:29.542 Firmware Activate/Download: Not Supported 00:08:29.542 Namespace Management: Supported 00:08:29.542 Device Self-Test: Not Supported 00:08:29.542 Directives: Supported 00:08:29.543 NVMe-MI: Not Supported 00:08:29.543 Virtualization Management: Not Supported 00:08:29.543 Doorbell Buffer Config: Supported 00:08:29.543 Get LBA Status Capability: Not Supported 00:08:29.543 Command & Feature Lockdown Capability: Not Supported 00:08:29.543 Abort Command Limit: 4 00:08:29.543 Async Event Request Limit: 4 00:08:29.543 Number of Firmware Slots: N/A 00:08:29.543 Firmware Slot 1 Read-Only: N/A 00:08:29.543 Firmware Activation Without Reset: N/A 00:08:29.543 Multiple Update Detection Support: N/A 00:08:29.543 Firmware Update Granularity: No Information Provided 00:08:29.543 Per-Namespace SMART Log: Yes 00:08:29.543 Asymmetric Namespace Access Log Page: Not Supported 
00:08:29.543 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:29.543 Command Effects Log Page: Supported 00:08:29.543 Get Log Page Extended Data: Supported 00:08:29.543 Telemetry Log Pages: Not Supported 00:08:29.543 Persistent Event Log Pages: Not Supported 00:08:29.543 Supported Log Pages Log Page: May Support 00:08:29.543 Commands Supported & Effects Log Page: Not Supported 00:08:29.543 Feature Identifiers & Effects Log Page: May Support 00:08:29.543 NVMe-MI Commands & Effects Log Page: May Support 00:08:29.543 Data Area 4 for Telemetry Log: Not Supported 00:08:29.543 Error Log Page Entries Supported: 1 00:08:29.543 Keep Alive: Not Supported 00:08:29.543 00:08:29.543 NVM Command Set Attributes 00:08:29.543 ========================== 00:08:29.543 Submission Queue Entry Size 00:08:29.543 Max: 64 00:08:29.543 Min: 64 00:08:29.543 Completion Queue Entry Size 00:08:29.543 Max: 16 00:08:29.543 Min: 16 00:08:29.543 Number of Namespaces: 256 00:08:29.543 Compare Command: Supported 00:08:29.543 Write Uncorrectable Command: Not Supported 00:08:29.543 Dataset Management Command: Supported 00:08:29.543 Write Zeroes Command: Supported 00:08:29.543 Set Features Save Field: Supported 00:08:29.543 Reservations: Not Supported 00:08:29.543 Timestamp: Supported 00:08:29.543 Copy: Supported 00:08:29.543 Volatile Write Cache: Present 00:08:29.543 Atomic Write Unit (Normal): 1 00:08:29.543 Atomic Write Unit (PFail): 1 00:08:29.543 Atomic Compare & Write Unit: 1 00:08:29.543 Fused Compare & Write: Not Supported 00:08:29.543 Scatter-Gather List 00:08:29.543 SGL Command Set: Supported 00:08:29.543 SGL Keyed: Not Supported 00:08:29.543 SGL Bit Bucket Descriptor: Not Supported 00:08:29.543 SGL Metadata Pointer: Not Supported 00:08:29.543 Oversized SGL: Not Supported 00:08:29.543 SGL Metadata Address: Not Supported 00:08:29.543 SGL Offset: Not Supported 00:08:29.543 Transport SGL Data Block: Not Supported 00:08:29.543 Replay Protected Memory Block: Not Supported 00:08:29.543 00:08:29.543 Firmware Slot Information 00:08:29.543 ========================= 00:08:29.543 Active slot: 1 00:08:29.543 Slot 1 Firmware Revision: 1.0 00:08:29.543 00:08:29.543 00:08:29.543 Commands Supported and Effects 00:08:29.543 ============================== 00:08:29.543 Admin Commands 00:08:29.543 -------------- 00:08:29.543 Delete I/O Submission Queue (00h): Supported 00:08:29.543 Create I/O Submission Queue (01h): Supported 00:08:29.543 Get Log Page (02h): Supported 00:08:29.543 Delete I/O Completion Queue (04h): Supported 00:08:29.543 Create I/O Completion Queue (05h): Supported 00:08:29.543 Identify (06h): Supported 00:08:29.543 Abort (08h): Supported 00:08:29.543 Set Features (09h): Supported 00:08:29.543 Get Features (0Ah): Supported 00:08:29.543 Asynchronous Event Request (0Ch): Supported 00:08:29.543 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:29.543 Directive Send (19h): Supported 00:08:29.543 Directive Receive (1Ah): Supported 00:08:29.543 Virtualization Management (1Ch): Supported 00:08:29.543 Doorbell Buffer Config (7Ch): Supported 00:08:29.543 Format NVM (80h): Supported LBA-Change 00:08:29.543 I/O Commands 00:08:29.543 ------------ 00:08:29.543 Flush (00h): Supported LBA-Change 00:08:29.543 Write (01h): Supported LBA-Change 00:08:29.543 Read (02h): Supported 00:08:29.543 Compare (05h): Supported 00:08:29.543 Write Zeroes (08h): Supported LBA-Change 00:08:29.543 Dataset Management (09h): Supported LBA-Change 00:08:29.543 Unknown (0Ch): Supported 00:08:29.543 Unknown (12h): Supported 00:08:29.543 Copy 
(19h): Supported LBA-Change 00:08:29.543 Unknown (1Dh): Supported LBA-Change 00:08:29.543 00:08:29.543 Error Log 00:08:29.543 ========= 00:08:29.543 00:08:29.543 Arbitration 00:08:29.543 =========== 00:08:29.543 Arbitration Burst: no limit 00:08:29.543 00:08:29.543 Power Management 00:08:29.543 ================ 00:08:29.543 Number of Power States: 1 00:08:29.543 Current Power State: Power State #0 00:08:29.543 Power State #0: 00:08:29.543 Max Power: 25.00 W 00:08:29.543 Non-Operational State: Operational 00:08:29.543 Entry Latency: 16 microseconds 00:08:29.543 Exit Latency: 4 microseconds 00:08:29.543 Relative Read Throughput: 0 00:08:29.543 Relative Read Latency: 0 00:08:29.543 Relative Write Throughput: 0 00:08:29.543 Relative Write Latency: 0 00:08:29.543 Idle Power: Not Reported 00:08:29.543 Active Power: Not Reported 00:08:29.543 Non-Operational Permissive Mode: Not Supported 00:08:29.543 00:08:29.543 Health Information 00:08:29.543 ================== 00:08:29.543 Critical Warnings: 00:08:29.543 Available Spare Space: OK 00:08:29.543 Temperature: OK 00:08:29.543 Device Reliability: OK 00:08:29.543 Read Only: No 00:08:29.543 Volatile Memory Backup: OK 00:08:29.543 Current Temperature: 323 Kelvin (50 Celsius) 00:08:29.543 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:29.543 Available Spare: 0% 00:08:29.543 Available Spare Threshold: 0% 00:08:29.543 Life Percentage Used: 0% 00:08:29.543 Data Units Read: 893 00:08:29.543 Data Units Written: 822 00:08:29.543 Host Read Commands: 38554 00:08:29.543 Host Write Commands: 37977 00:08:29.543 Controller Busy Time: 0 minutes 00:08:29.543 Power Cycles: 0 00:08:29.543 Power On Hours: 0 hours 00:08:29.543 Unsafe Shutdowns: 0 00:08:29.543 Unrecoverable Media Errors: 0 00:08:29.543 Lifetime Error Log Entries: 0 00:08:29.543 Warning Temperature Time: 0 minutes 00:08:29.543 Critical Temperature Time: 0 minutes 00:08:29.543 00:08:29.543 Number of Queues 00:08:29.543 ================ 00:08:29.543 Number of I/O Submission Queues: 64 00:08:29.543 Number of I/O Completion Queues: 64 00:08:29.543 00:08:29.543 ZNS Specific Controller Data 00:08:29.543 ============================ 00:08:29.543 Zone Append Size Limit: 0 00:08:29.543 00:08:29.543 00:08:29.543 Active Namespaces 00:08:29.543 ================= 00:08:29.543 Namespace ID:1 00:08:29.543 Error Recovery Timeout: Unlimited 00:08:29.543 Command Set Identifier: NVM (00h) 00:08:29.543 Deallocate: Supported 00:08:29.543 Deallocated/Unwritten Error: Supported 00:08:29.543 Deallocated Read Value: All 0x00 00:08:29.543 Deallocate in Write Zeroes: Not Supported 00:08:29.543 Deallocated Guard Field: 0xFFFF 00:08:29.543 Flush: Supported 00:08:29.543 Reservation: Not Supported 00:08:29.543 Namespace Sharing Capabilities: Multiple Controllers 00:08:29.543 Size (in LBAs): 262144 (1GiB) 00:08:29.543 Capacity (in LBAs): 262144 (1GiB) 00:08:29.543 Utilization (in LBAs): 262144 (1GiB) 00:08:29.543 Thin Provisioning: Not Supported 00:08:29.543 Per-NS Atomic Units: No 00:08:29.543 Maximum Single Source Range Length: 128 00:08:29.543 Maximum Copy Length: 128 00:08:29.543 Maximum Source Range Count: 128 00:08:29.543 NGUID/EUI64 Never Reused: No 00:08:29.543 Namespace Write Protected: No 00:08:29.543 Endurance group ID: 1 00:08:29.543 Number of LBA Formats: 8 00:08:29.543 Current LBA Format: LBA Format #04 00:08:29.543 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:29.543 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:29.543 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:29.543 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:29.543 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:29.543 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:29.543 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:29.543 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:29.543 00:08:29.543 Get Feature FDP: 00:08:29.543 ================ 00:08:29.543 Enabled: Yes 00:08:29.543 FDP configuration index: 0 00:08:29.543 00:08:29.543 FDP configurations log page 00:08:29.543 =========================== 00:08:29.543 Number of FDP configurations: 1 00:08:29.543 Version: 0 00:08:29.543 Size: 112 00:08:29.543 FDP Configuration Descriptor: 0 00:08:29.543 Descriptor Size: 96 00:08:29.543 Reclaim Group Identifier format: 2 00:08:29.543 FDP Volatile Write Cache: Not Present 00:08:29.543 FDP Configuration: Valid 00:08:29.543 Vendor Specific Size: 0 00:08:29.543 Number of Reclaim Groups: 2 00:08:29.543 Number of Reclaim Unit Handles: 8 00:08:29.543 Max Placement Identifiers: 128 00:08:29.543 Number of Namespaces Supported: 256 00:08:29.543 Reclaim Unit Nominal Size: 6000000 bytes 00:08:29.543 Estimated Reclaim Unit Time Limit: Not Reported 00:08:29.543 RUH Desc #000: RUH Type: Initially Isolated 00:08:29.544 RUH Desc #001: RUH Type: Initially Isolated 00:08:29.544 RUH Desc #002: RUH Type: Initially Isolated 00:08:29.544 RUH Desc #003: RUH Type: Initially Isolated 00:08:29.544 RUH Desc #004: RUH Type: Initially Isolated 00:08:29.544 RUH Desc #005: RUH Type: Initially Isolated 00:08:29.544 RUH Desc #006: RUH Type: Initially Isolated 00:08:29.544 RUH Desc #007: RUH Type: Initially Isolated 00:08:29.544 00:08:29.544 FDP reclaim unit handle usage log page 00:08:29.544 ====================================== 00:08:29.544 Number of Reclaim Unit Handles: 8 00:08:29.544 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:29.544 RUH Usage Desc #001: RUH Attributes: Unused 00:08:29.544 RUH Usage Desc #002: RUH Attributes: Unused 00:08:29.544 RUH Usage Desc #003: RUH Attributes: Unused 00:08:29.544 RUH Usage Desc #004: RUH Attributes: Unused 00:08:29.544 RUH Usage Desc #005: RUH Attributes: Unused 00:08:29.544 RUH Usage Desc #006: RUH Attributes: Unused 00:08:29.544 RUH Usage Desc #007: RUH Attributes: Unused 00:08:29.544 00:08:29.544 FDP statistics log page 00:08:29.544 ======================= 00:08:29.544 Host bytes with metadata written: 534945792 00:08:29.544 Media bytes with metadata written: 535003136 00:08:29.544 Media bytes erased: 0 00:08:29.544 00:08:29.544 FDP events log page 00:08:29.544 =================== 00:08:29.544 Number of FDP events: 0 00:08:29.544 00:08:29.544 NVM Specific Namespace Data 00:08:29.544 =========================== 00:08:29.544 Logical Block Storage Tag Mask: 0 00:08:29.544 Protection Information Capabilities: 00:08:29.544 16b Guard Protection Information Storage Tag Support: No 00:08:29.544 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:29.544 Storage Tag Check Read Support: No 00:08:29.544 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.544 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.544 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.544 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.544 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.544 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.544 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.544 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:29.544 00:08:29.544 real 0m1.747s 00:08:29.544 user 0m0.629s 00:08:29.544 sys 0m0.884s 00:08:29.544 13:49:22 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.544 13:49:22 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:29.544 ************************************ 00:08:29.544 END TEST nvme_identify 00:08:29.544 ************************************ 00:08:29.803 13:49:22 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:29.803 13:49:22 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.803 13:49:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.803 13:49:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.803 ************************************ 00:08:29.803 START TEST nvme_perf 00:08:29.803 ************************************ 00:08:29.803 13:49:22 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:29.803 13:49:22 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:31.182 Initializing NVMe Controllers 00:08:31.182 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:31.182 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:31.182 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:31.182 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:31.182 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:31.182 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:31.182 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:31.182 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:31.182 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:31.182 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:31.182 Initialization complete. Launching workers. 
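For reference, the identify output above is produced by the loop visible in the xtrace (nvme/nvme.sh@15-16), which walks the PCIe BDFs under test and runs spdk_nvme_identify against each one. A minimal shell sketch of that loop, assuming the bdfs array holds the four controllers this job attaches elsewhere in the log (the array contents are an inference, not shown in the script itself):

    #!/usr/bin/env bash
    # Sketch of the identify loop from nvme/nvme.sh (assumed BDF list, taken
    # from the controllers attached in this log).
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        # -r selects the transport ID to probe; -i 0 is the shared memory
        # group ID, matching the invocation logged above.
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:${bdf}" -i 0
    done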
00:08:31.182 ======================================================== 00:08:31.182 Latency(us) 00:08:31.182 Device Information : IOPS MiB/s Average min max 00:08:31.182 PCIE (0000:00:10.0) NSID 1 from core 0: 14214.63 166.58 9023.98 7854.23 51317.70 00:08:31.182 PCIE (0000:00:11.0) NSID 1 from core 0: 14214.63 166.58 9009.81 7836.28 49271.79 00:08:31.182 PCIE (0000:00:13.0) NSID 1 from core 0: 14214.63 166.58 8994.00 7955.08 47802.83 00:08:31.182 PCIE (0000:00:12.0) NSID 1 from core 0: 14214.63 166.58 8978.69 7959.58 45763.52 00:08:31.182 PCIE (0000:00:12.0) NSID 2 from core 0: 14214.63 166.58 8963.27 7926.41 43768.60 00:08:31.182 PCIE (0000:00:12.0) NSID 3 from core 0: 14278.37 167.32 8908.07 7953.85 36895.67 00:08:31.182 ======================================================== 00:08:31.182 Total : 85351.52 1000.21 8979.58 7836.28 51317.70 00:08:31.182 00:08:31.182 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:31.182 ================================================================================= 00:08:31.182 1.00000% : 8001.182us 00:08:31.182 10.00000% : 8211.740us 00:08:31.182 25.00000% : 8422.297us 00:08:31.182 50.00000% : 8685.494us 00:08:31.182 75.00000% : 8948.691us 00:08:31.182 90.00000% : 9159.248us 00:08:31.182 95.00000% : 9369.806us 00:08:31.182 98.00000% : 10843.708us 00:08:31.182 99.00000% : 12791.364us 00:08:31.182 99.50000% : 44638.175us 00:08:31.182 99.90000% : 50954.898us 00:08:31.182 99.99000% : 51376.013us 00:08:31.182 99.99900% : 51376.013us 00:08:31.182 99.99990% : 51376.013us 00:08:31.182 99.99999% : 51376.013us 00:08:31.182 00:08:31.182 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:31.182 ================================================================================= 00:08:31.182 1.00000% : 8106.461us 00:08:31.182 10.00000% : 8264.379us 00:08:31.182 25.00000% : 8422.297us 00:08:31.182 50.00000% : 8685.494us 00:08:31.182 75.00000% : 8896.051us 00:08:31.182 90.00000% : 9106.609us 00:08:31.182 95.00000% : 9317.166us 00:08:31.182 98.00000% : 11159.544us 00:08:31.182 99.00000% : 12791.364us 00:08:31.182 99.50000% : 42743.158us 00:08:31.182 99.90000% : 48849.324us 00:08:31.182 99.99000% : 49270.439us 00:08:31.182 99.99900% : 49480.996us 00:08:31.182 99.99990% : 49480.996us 00:08:31.182 99.99999% : 49480.996us 00:08:31.182 00:08:31.182 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:31.182 ================================================================================= 00:08:31.182 1.00000% : 8106.461us 00:08:31.182 10.00000% : 8264.379us 00:08:31.182 25.00000% : 8422.297us 00:08:31.182 50.00000% : 8685.494us 00:08:31.182 75.00000% : 8896.051us 00:08:31.182 90.00000% : 9106.609us 00:08:31.182 95.00000% : 9264.527us 00:08:31.182 98.00000% : 11054.265us 00:08:31.182 99.00000% : 12844.003us 00:08:31.182 99.50000% : 41269.256us 00:08:31.182 99.90000% : 47375.422us 00:08:31.182 99.99000% : 47796.537us 00:08:31.182 99.99900% : 48007.094us 00:08:31.182 99.99990% : 48007.094us 00:08:31.182 99.99999% : 48007.094us 00:08:31.182 00:08:31.182 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:31.182 ================================================================================= 00:08:31.182 1.00000% : 8106.461us 00:08:31.182 10.00000% : 8264.379us 00:08:31.182 25.00000% : 8422.297us 00:08:31.182 50.00000% : 8685.494us 00:08:31.182 75.00000% : 8896.051us 00:08:31.182 90.00000% : 9106.609us 00:08:31.182 95.00000% : 9317.166us 00:08:31.182 98.00000% : 10896.347us 00:08:31.182 99.00000% : 
12422.888us 00:08:31.182 99.50000% : 39374.239us 00:08:31.182 99.90000% : 45480.405us 00:08:31.182 99.99000% : 45901.520us 00:08:31.182 99.99900% : 45901.520us 00:08:31.182 99.99990% : 45901.520us 00:08:31.182 99.99999% : 45901.520us 00:08:31.182 00:08:31.182 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:31.182 ================================================================================= 00:08:31.182 1.00000% : 8106.461us 00:08:31.182 10.00000% : 8264.379us 00:08:31.182 25.00000% : 8422.297us 00:08:31.182 50.00000% : 8685.494us 00:08:31.182 75.00000% : 8896.051us 00:08:31.182 90.00000% : 9106.609us 00:08:31.182 95.00000% : 9317.166us 00:08:31.182 98.00000% : 11159.544us 00:08:31.182 99.00000% : 12475.528us 00:08:31.182 99.50000% : 37268.665us 00:08:31.182 99.90000% : 43585.388us 00:08:31.182 99.99000% : 43795.945us 00:08:31.182 99.99900% : 43795.945us 00:08:31.182 99.99990% : 43795.945us 00:08:31.182 99.99999% : 43795.945us 00:08:31.182 00:08:31.182 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:31.182 ================================================================================= 00:08:31.182 1.00000% : 8106.461us 00:08:31.182 10.00000% : 8264.379us 00:08:31.182 25.00000% : 8422.297us 00:08:31.182 50.00000% : 8685.494us 00:08:31.182 75.00000% : 8896.051us 00:08:31.182 90.00000% : 9106.609us 00:08:31.182 95.00000% : 9317.166us 00:08:31.182 98.00000% : 11370.101us 00:08:31.182 99.00000% : 12580.806us 00:08:31.182 99.50000% : 30530.827us 00:08:31.182 99.90000% : 36636.993us 00:08:31.182 99.99000% : 37058.108us 00:08:31.182 99.99900% : 37058.108us 00:08:31.182 99.99990% : 37058.108us 00:08:31.182 99.99999% : 37058.108us 00:08:31.182 00:08:31.182 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:31.182 ============================================================================== 00:08:31.182 Range in us Cumulative IO count 00:08:31.182 7843.264 - 7895.904: 0.0561% ( 8) 00:08:31.182 7895.904 - 7948.543: 0.3363% ( 40) 00:08:31.182 7948.543 - 8001.182: 1.2332% ( 128) 00:08:31.182 8001.182 - 8053.822: 2.6485% ( 202) 00:08:31.182 8053.822 - 8106.461: 5.4442% ( 399) 00:08:31.182 8106.461 - 8159.100: 8.6743% ( 461) 00:08:31.182 8159.100 - 8211.740: 12.3459% ( 524) 00:08:31.182 8211.740 - 8264.379: 16.3397% ( 570) 00:08:31.182 8264.379 - 8317.018: 20.4737% ( 590) 00:08:31.182 8317.018 - 8369.658: 24.9790% ( 643) 00:08:31.182 8369.658 - 8422.297: 29.4773% ( 642) 00:08:31.182 8422.297 - 8474.937: 34.2419% ( 680) 00:08:31.183 8474.937 - 8527.576: 39.1115% ( 695) 00:08:31.183 8527.576 - 8580.215: 44.0653% ( 707) 00:08:31.183 8580.215 - 8632.855: 49.0961% ( 718) 00:08:31.183 8632.855 - 8685.494: 54.1200% ( 717) 00:08:31.183 8685.494 - 8738.133: 59.1788% ( 722) 00:08:31.183 8738.133 - 8790.773: 64.2727% ( 727) 00:08:31.183 8790.773 - 8843.412: 69.3946% ( 731) 00:08:31.183 8843.412 - 8896.051: 74.2853% ( 698) 00:08:31.183 8896.051 - 8948.691: 79.0639% ( 682) 00:08:31.183 8948.691 - 9001.330: 83.3450% ( 611) 00:08:31.183 9001.330 - 9053.969: 86.9184% ( 510) 00:08:31.183 9053.969 - 9106.609: 89.4549% ( 362) 00:08:31.183 9106.609 - 9159.248: 91.2976% ( 263) 00:08:31.183 9159.248 - 9211.888: 92.7060% ( 201) 00:08:31.183 9211.888 - 9264.527: 93.8341% ( 161) 00:08:31.183 9264.527 - 9317.166: 94.8290% ( 142) 00:08:31.183 9317.166 - 9369.806: 95.5788% ( 107) 00:08:31.183 9369.806 - 9422.445: 96.1603% ( 83) 00:08:31.183 9422.445 - 9475.084: 96.6087% ( 64) 00:08:31.183 9475.084 - 9527.724: 96.8540% ( 35) 00:08:31.183 9527.724 - 9580.363: 
96.9381% ( 12) 00:08:31.183 9580.363 - 9633.002: 97.0221% ( 12) 00:08:31.183 9633.002 - 9685.642: 97.1132% ( 13) 00:08:31.183 9685.642 - 9738.281: 97.1833% ( 10) 00:08:31.183 9738.281 - 9790.920: 97.2393% ( 8) 00:08:31.183 9790.920 - 9843.560: 97.2814% ( 6) 00:08:31.183 9843.560 - 9896.199: 97.3234% ( 6) 00:08:31.183 9896.199 - 9948.839: 97.3515% ( 4) 00:08:31.183 9948.839 - 10001.478: 97.3935% ( 6) 00:08:31.183 10001.478 - 10054.117: 97.4285% ( 5) 00:08:31.183 10054.117 - 10106.757: 97.5196% ( 13) 00:08:31.183 10106.757 - 10159.396: 97.5547% ( 5) 00:08:31.183 10159.396 - 10212.035: 97.6037% ( 7) 00:08:31.183 10212.035 - 10264.675: 97.6387% ( 5) 00:08:31.183 10264.675 - 10317.314: 97.6948% ( 8) 00:08:31.183 10317.314 - 10369.953: 97.7298% ( 5) 00:08:31.183 10369.953 - 10422.593: 97.7649% ( 5) 00:08:31.183 10422.593 - 10475.232: 97.7929% ( 4) 00:08:31.183 10475.232 - 10527.871: 97.8419% ( 7) 00:08:31.183 10527.871 - 10580.511: 97.8700% ( 4) 00:08:31.183 10580.511 - 10633.150: 97.9120% ( 6) 00:08:31.183 10633.150 - 10685.790: 97.9540% ( 6) 00:08:31.183 10685.790 - 10738.429: 97.9751% ( 3) 00:08:31.183 10738.429 - 10791.068: 97.9821% ( 1) 00:08:31.183 10791.068 - 10843.708: 98.0031% ( 3) 00:08:31.183 10843.708 - 10896.347: 98.0171% ( 2) 00:08:31.183 10896.347 - 10948.986: 98.0381% ( 3) 00:08:31.183 10948.986 - 11001.626: 98.0521% ( 2) 00:08:31.183 11001.626 - 11054.265: 98.0661% ( 2) 00:08:31.183 11054.265 - 11106.904: 98.0872% ( 3) 00:08:31.183 11106.904 - 11159.544: 98.0942% ( 1) 00:08:31.183 11159.544 - 11212.183: 98.1432% ( 7) 00:08:31.183 11212.183 - 11264.822: 98.1783% ( 5) 00:08:31.183 11264.822 - 11317.462: 98.2063% ( 4) 00:08:31.183 11317.462 - 11370.101: 98.2483% ( 6) 00:08:31.183 11370.101 - 11422.741: 98.2834% ( 5) 00:08:31.183 11422.741 - 11475.380: 98.3184% ( 5) 00:08:31.183 11475.380 - 11528.019: 98.3534% ( 5) 00:08:31.183 11528.019 - 11580.659: 98.3744% ( 3) 00:08:31.183 11580.659 - 11633.298: 98.3885% ( 2) 00:08:31.183 11633.298 - 11685.937: 98.4165% ( 4) 00:08:31.183 11685.937 - 11738.577: 98.4445% ( 4) 00:08:31.183 11738.577 - 11791.216: 98.4655% ( 3) 00:08:31.183 11791.216 - 11843.855: 98.4865% ( 3) 00:08:31.183 11843.855 - 11896.495: 98.5146% ( 4) 00:08:31.183 11896.495 - 11949.134: 98.5776% ( 9) 00:08:31.183 11949.134 - 12001.773: 98.6127% ( 5) 00:08:31.183 12001.773 - 12054.413: 98.6477% ( 5) 00:08:31.183 12054.413 - 12107.052: 98.7038% ( 8) 00:08:31.183 12107.052 - 12159.692: 98.7458% ( 6) 00:08:31.183 12159.692 - 12212.331: 98.7738% ( 4) 00:08:31.183 12212.331 - 12264.970: 98.8018% ( 4) 00:08:31.183 12264.970 - 12317.610: 98.8159% ( 2) 00:08:31.183 12317.610 - 12370.249: 98.8369% ( 3) 00:08:31.183 12370.249 - 12422.888: 98.8649% ( 4) 00:08:31.183 12422.888 - 12475.528: 98.8859% ( 3) 00:08:31.183 12475.528 - 12528.167: 98.9140% ( 4) 00:08:31.183 12528.167 - 12580.806: 98.9280% ( 2) 00:08:31.183 12580.806 - 12633.446: 98.9490% ( 3) 00:08:31.183 12633.446 - 12686.085: 98.9840% ( 5) 00:08:31.183 12686.085 - 12738.724: 98.9910% ( 1) 00:08:31.183 12738.724 - 12791.364: 99.0191% ( 4) 00:08:31.183 12791.364 - 12844.003: 99.0401% ( 3) 00:08:31.183 12844.003 - 12896.643: 99.0611% ( 3) 00:08:31.183 12896.643 - 12949.282: 99.0821% ( 3) 00:08:31.183 12949.282 - 13001.921: 99.1031% ( 3) 00:08:31.183 42743.158 - 42953.716: 99.1312% ( 4) 00:08:31.183 42953.716 - 43164.273: 99.1872% ( 8) 00:08:31.183 43164.273 - 43374.831: 99.2293% ( 6) 00:08:31.183 43374.831 - 43585.388: 99.2923% ( 9) 00:08:31.183 43585.388 - 43795.945: 99.3414% ( 7) 00:08:31.183 43795.945 - 44006.503: 99.3974% ( 8) 
00:08:31.183 44006.503 - 44217.060: 99.4325% ( 5) 00:08:31.183 44217.060 - 44427.618: 99.4815% ( 7) 00:08:31.183 44427.618 - 44638.175: 99.5305% ( 7) 00:08:31.183 44638.175 - 44848.733: 99.5516% ( 3) 00:08:31.183 49270.439 - 49480.996: 99.5726% ( 3) 00:08:31.183 49480.996 - 49691.553: 99.6216% ( 7) 00:08:31.183 49691.553 - 49902.111: 99.6707% ( 7) 00:08:31.183 49902.111 - 50112.668: 99.7267% ( 8) 00:08:31.183 50112.668 - 50323.226: 99.7758% ( 7) 00:08:31.183 50323.226 - 50533.783: 99.8248% ( 7) 00:08:31.183 50533.783 - 50744.341: 99.8599% ( 5) 00:08:31.183 50744.341 - 50954.898: 99.9159% ( 8) 00:08:31.183 50954.898 - 51165.455: 99.9720% ( 8) 00:08:31.183 51165.455 - 51376.013: 100.0000% ( 4) 00:08:31.183 00:08:31.183 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:31.183 ============================================================================== 00:08:31.183 Range in us Cumulative IO count 00:08:31.183 7790.625 - 7843.264: 0.0070% ( 1) 00:08:31.183 7843.264 - 7895.904: 0.0420% ( 5) 00:08:31.183 7895.904 - 7948.543: 0.0701% ( 4) 00:08:31.183 7948.543 - 8001.182: 0.2312% ( 23) 00:08:31.183 8001.182 - 8053.822: 0.6516% ( 60) 00:08:31.183 8053.822 - 8106.461: 1.7517% ( 157) 00:08:31.183 8106.461 - 8159.100: 3.9518% ( 314) 00:08:31.183 8159.100 - 8211.740: 7.5182% ( 509) 00:08:31.183 8211.740 - 8264.379: 12.2127% ( 670) 00:08:31.183 8264.379 - 8317.018: 16.7601% ( 649) 00:08:31.183 8317.018 - 8369.658: 21.6928% ( 704) 00:08:31.183 8369.658 - 8422.297: 26.9759% ( 754) 00:08:31.183 8422.297 - 8474.937: 32.3010% ( 760) 00:08:31.183 8474.937 - 8527.576: 37.9344% ( 804) 00:08:31.183 8527.576 - 8580.215: 43.5048% ( 795) 00:08:31.183 8580.215 - 8632.855: 49.3484% ( 834) 00:08:31.183 8632.855 - 8685.494: 55.2971% ( 849) 00:08:31.183 8685.494 - 8738.133: 61.3859% ( 869) 00:08:31.183 8738.133 - 8790.773: 67.4467% ( 865) 00:08:31.183 8790.773 - 8843.412: 73.3254% ( 839) 00:08:31.183 8843.412 - 8896.051: 78.7626% ( 776) 00:08:31.183 8896.051 - 8948.691: 83.2820% ( 645) 00:08:31.183 8948.691 - 9001.330: 86.8203% ( 505) 00:08:31.183 9001.330 - 9053.969: 89.2307% ( 344) 00:08:31.183 9053.969 - 9106.609: 91.0104% ( 254) 00:08:31.183 9106.609 - 9159.248: 92.4748% ( 209) 00:08:31.183 9159.248 - 9211.888: 93.6869% ( 173) 00:08:31.183 9211.888 - 9264.527: 94.6469% ( 137) 00:08:31.183 9264.527 - 9317.166: 95.4246% ( 111) 00:08:31.183 9317.166 - 9369.806: 95.9711% ( 78) 00:08:31.183 9369.806 - 9422.445: 96.3145% ( 49) 00:08:31.183 9422.445 - 9475.084: 96.5877% ( 39) 00:08:31.183 9475.084 - 9527.724: 96.7349% ( 21) 00:08:31.183 9527.724 - 9580.363: 96.8610% ( 18) 00:08:31.183 9580.363 - 9633.002: 96.9591% ( 14) 00:08:31.183 9633.002 - 9685.642: 97.0502% ( 13) 00:08:31.183 9685.642 - 9738.281: 97.1413% ( 13) 00:08:31.183 9738.281 - 9790.920: 97.2393% ( 14) 00:08:31.183 9790.920 - 9843.560: 97.3374% ( 14) 00:08:31.183 9843.560 - 9896.199: 97.4425% ( 15) 00:08:31.183 9896.199 - 9948.839: 97.5336% ( 13) 00:08:31.183 9948.839 - 10001.478: 97.5757% ( 6) 00:08:31.183 10001.478 - 10054.117: 97.6177% ( 6) 00:08:31.183 10054.117 - 10106.757: 97.6598% ( 6) 00:08:31.183 10106.757 - 10159.396: 97.6808% ( 3) 00:08:31.183 10159.396 - 10212.035: 97.7088% ( 4) 00:08:31.183 10212.035 - 10264.675: 97.7228% ( 2) 00:08:31.183 10264.675 - 10317.314: 97.7508% ( 4) 00:08:31.183 10317.314 - 10369.953: 97.7578% ( 1) 00:08:31.183 10422.593 - 10475.232: 97.7649% ( 1) 00:08:31.183 10475.232 - 10527.871: 97.7789% ( 2) 00:08:31.183 10527.871 - 10580.511: 97.7999% ( 3) 00:08:31.183 10580.511 - 10633.150: 97.8209% ( 3) 
00:08:31.183 10633.150 - 10685.790: 97.8349% ( 2) 00:08:31.183 10685.790 - 10738.429: 97.8559% ( 3) 00:08:31.183 10738.429 - 10791.068: 97.8840% ( 4) 00:08:31.183 10791.068 - 10843.708: 97.8980% ( 2) 00:08:31.183 10843.708 - 10896.347: 97.9120% ( 2) 00:08:31.183 10896.347 - 10948.986: 97.9330% ( 3) 00:08:31.183 10948.986 - 11001.626: 97.9470% ( 2) 00:08:31.183 11001.626 - 11054.265: 97.9610% ( 2) 00:08:31.183 11054.265 - 11106.904: 97.9821% ( 3) 00:08:31.183 11106.904 - 11159.544: 98.0031% ( 3) 00:08:31.183 11159.544 - 11212.183: 98.0171% ( 2) 00:08:31.183 11212.183 - 11264.822: 98.0381% ( 3) 00:08:31.183 11264.822 - 11317.462: 98.0521% ( 2) 00:08:31.183 11317.462 - 11370.101: 98.0661% ( 2) 00:08:31.183 11370.101 - 11422.741: 98.0872% ( 3) 00:08:31.183 11422.741 - 11475.380: 98.1712% ( 12) 00:08:31.183 11475.380 - 11528.019: 98.1993% ( 4) 00:08:31.183 11528.019 - 11580.659: 98.2413% ( 6) 00:08:31.183 11580.659 - 11633.298: 98.2974% ( 8) 00:08:31.183 11633.298 - 11685.937: 98.3464% ( 7) 00:08:31.183 11685.937 - 11738.577: 98.3885% ( 6) 00:08:31.183 11738.577 - 11791.216: 98.4305% ( 6) 00:08:31.183 11791.216 - 11843.855: 98.4585% ( 4) 00:08:31.183 11843.855 - 11896.495: 98.4865% ( 4) 00:08:31.183 11896.495 - 11949.134: 98.5076% ( 3) 00:08:31.183 11949.134 - 12001.773: 98.5356% ( 4) 00:08:31.183 12001.773 - 12054.413: 98.5846% ( 7) 00:08:31.183 12054.413 - 12107.052: 98.6267% ( 6) 00:08:31.183 12107.052 - 12159.692: 98.6757% ( 7) 00:08:31.183 12159.692 - 12212.331: 98.7318% ( 8) 00:08:31.183 12212.331 - 12264.970: 98.7528% ( 3) 00:08:31.183 12264.970 - 12317.610: 98.7878% ( 5) 00:08:31.183 12317.610 - 12370.249: 98.8159% ( 4) 00:08:31.184 12370.249 - 12422.888: 98.8369% ( 3) 00:08:31.184 12422.888 - 12475.528: 98.8649% ( 4) 00:08:31.184 12475.528 - 12528.167: 98.8859% ( 3) 00:08:31.184 12528.167 - 12580.806: 98.9140% ( 4) 00:08:31.184 12580.806 - 12633.446: 98.9420% ( 4) 00:08:31.184 12633.446 - 12686.085: 98.9630% ( 3) 00:08:31.184 12686.085 - 12738.724: 98.9910% ( 4) 00:08:31.184 12738.724 - 12791.364: 99.0121% ( 3) 00:08:31.184 12791.364 - 12844.003: 99.0401% ( 4) 00:08:31.184 12844.003 - 12896.643: 99.0681% ( 4) 00:08:31.184 12896.643 - 12949.282: 99.0961% ( 4) 00:08:31.184 12949.282 - 13001.921: 99.1031% ( 1) 00:08:31.184 40848.141 - 41058.699: 99.1101% ( 1) 00:08:31.184 41058.699 - 41269.256: 99.1592% ( 7) 00:08:31.184 41269.256 - 41479.814: 99.2012% ( 6) 00:08:31.184 41479.814 - 41690.371: 99.2503% ( 7) 00:08:31.184 41690.371 - 41900.929: 99.3063% ( 8) 00:08:31.184 41900.929 - 42111.486: 99.3624% ( 8) 00:08:31.184 42111.486 - 42322.043: 99.4044% ( 6) 00:08:31.184 42322.043 - 42532.601: 99.4675% ( 9) 00:08:31.184 42532.601 - 42743.158: 99.5165% ( 7) 00:08:31.184 42743.158 - 42953.716: 99.5516% ( 5) 00:08:31.184 47375.422 - 47585.979: 99.5796% ( 4) 00:08:31.184 47585.979 - 47796.537: 99.6357% ( 8) 00:08:31.184 47796.537 - 48007.094: 99.6777% ( 6) 00:08:31.184 48007.094 - 48217.651: 99.7267% ( 7) 00:08:31.184 48217.651 - 48428.209: 99.7828% ( 8) 00:08:31.184 48428.209 - 48638.766: 99.8388% ( 8) 00:08:31.184 48638.766 - 48849.324: 99.9019% ( 9) 00:08:31.184 48849.324 - 49059.881: 99.9510% ( 7) 00:08:31.184 49059.881 - 49270.439: 99.9930% ( 6) 00:08:31.184 49270.439 - 49480.996: 100.0000% ( 1) 00:08:31.184 00:08:31.184 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:31.184 ============================================================================== 00:08:31.184 Range in us Cumulative IO count 00:08:31.184 7948.543 - 8001.182: 0.0561% ( 8) 00:08:31.184 8001.182 - 
8053.822: 0.5535% ( 71) 00:08:31.184 8053.822 - 8106.461: 1.6676% ( 159) 00:08:31.184 8106.461 - 8159.100: 4.0219% ( 336) 00:08:31.184 8159.100 - 8211.740: 7.5953% ( 510) 00:08:31.184 8211.740 - 8264.379: 11.6802% ( 583) 00:08:31.184 8264.379 - 8317.018: 16.3327% ( 664) 00:08:31.184 8317.018 - 8369.658: 21.1463% ( 687) 00:08:31.184 8369.658 - 8422.297: 26.4784% ( 761) 00:08:31.184 8422.297 - 8474.937: 31.7685% ( 755) 00:08:31.184 8474.937 - 8527.576: 37.5631% ( 827) 00:08:31.184 8527.576 - 8580.215: 43.4487% ( 840) 00:08:31.184 8580.215 - 8632.855: 49.3274% ( 839) 00:08:31.184 8632.855 - 8685.494: 55.3952% ( 866) 00:08:31.184 8685.494 - 8738.133: 61.5121% ( 873) 00:08:31.184 8738.133 - 8790.773: 67.6079% ( 870) 00:08:31.184 8790.773 - 8843.412: 73.4865% ( 839) 00:08:31.184 8843.412 - 8896.051: 78.9798% ( 784) 00:08:31.184 8896.051 - 8948.691: 83.5762% ( 656) 00:08:31.184 8948.691 - 9001.330: 87.0305% ( 493) 00:08:31.184 9001.330 - 9053.969: 89.5600% ( 361) 00:08:31.184 9053.969 - 9106.609: 91.4308% ( 267) 00:08:31.184 9106.609 - 9159.248: 92.9933% ( 223) 00:08:31.184 9159.248 - 9211.888: 94.2965% ( 186) 00:08:31.184 9211.888 - 9264.527: 95.3055% ( 144) 00:08:31.184 9264.527 - 9317.166: 95.9781% ( 96) 00:08:31.184 9317.166 - 9369.806: 96.5177% ( 77) 00:08:31.184 9369.806 - 9422.445: 96.8680% ( 50) 00:08:31.184 9422.445 - 9475.084: 97.0502% ( 26) 00:08:31.184 9475.084 - 9527.724: 97.1483% ( 14) 00:08:31.184 9527.724 - 9580.363: 97.2043% ( 8) 00:08:31.184 9580.363 - 9633.002: 97.2814% ( 11) 00:08:31.184 9633.002 - 9685.642: 97.3445% ( 9) 00:08:31.184 9685.642 - 9738.281: 97.4215% ( 11) 00:08:31.184 9738.281 - 9790.920: 97.4916% ( 10) 00:08:31.184 9790.920 - 9843.560: 97.5476% ( 8) 00:08:31.184 9843.560 - 9896.199: 97.6107% ( 9) 00:08:31.184 9896.199 - 9948.839: 97.6598% ( 7) 00:08:31.184 9948.839 - 10001.478: 97.7018% ( 6) 00:08:31.184 10001.478 - 10054.117: 97.7438% ( 6) 00:08:31.184 10054.117 - 10106.757: 97.7578% ( 2) 00:08:31.184 10317.314 - 10369.953: 97.7719% ( 2) 00:08:31.184 10369.953 - 10422.593: 97.7999% ( 4) 00:08:31.184 10422.593 - 10475.232: 97.8279% ( 4) 00:08:31.184 10475.232 - 10527.871: 97.8489% ( 3) 00:08:31.184 10527.871 - 10580.511: 97.8559% ( 1) 00:08:31.184 10580.511 - 10633.150: 97.8700% ( 2) 00:08:31.184 10633.150 - 10685.790: 97.8910% ( 3) 00:08:31.184 10685.790 - 10738.429: 97.9050% ( 2) 00:08:31.184 10738.429 - 10791.068: 97.9260% ( 3) 00:08:31.184 10791.068 - 10843.708: 97.9400% ( 2) 00:08:31.184 10843.708 - 10896.347: 97.9540% ( 2) 00:08:31.184 10896.347 - 10948.986: 97.9751% ( 3) 00:08:31.184 10948.986 - 11001.626: 97.9891% ( 2) 00:08:31.184 11001.626 - 11054.265: 98.0101% ( 3) 00:08:31.184 11054.265 - 11106.904: 98.0241% ( 2) 00:08:31.184 11106.904 - 11159.544: 98.0451% ( 3) 00:08:31.184 11159.544 - 11212.183: 98.0661% ( 3) 00:08:31.184 11212.183 - 11264.822: 98.0802% ( 2) 00:08:31.184 11264.822 - 11317.462: 98.1012% ( 3) 00:08:31.184 11317.462 - 11370.101: 98.1222% ( 3) 00:08:31.184 11370.101 - 11422.741: 98.1362% ( 2) 00:08:31.184 11422.741 - 11475.380: 98.1502% ( 2) 00:08:31.184 11475.380 - 11528.019: 98.1712% ( 3) 00:08:31.184 11528.019 - 11580.659: 98.1993% ( 4) 00:08:31.184 11580.659 - 11633.298: 98.2413% ( 6) 00:08:31.184 11633.298 - 11685.937: 98.2623% ( 3) 00:08:31.184 11685.937 - 11738.577: 98.2904% ( 4) 00:08:31.184 11738.577 - 11791.216: 98.3184% ( 4) 00:08:31.184 11791.216 - 11843.855: 98.3464% ( 4) 00:08:31.184 11843.855 - 11896.495: 98.3674% ( 3) 00:08:31.184 11896.495 - 11949.134: 98.3885% ( 3) 00:08:31.184 11949.134 - 12001.773: 98.4165% ( 4) 
00:08:31.184 12001.773 - 12054.413: 98.4445% ( 4) 00:08:31.184 12054.413 - 12107.052: 98.4725% ( 4) 00:08:31.184 12107.052 - 12159.692: 98.5216% ( 7) 00:08:31.184 12159.692 - 12212.331: 98.5706% ( 7) 00:08:31.184 12212.331 - 12264.970: 98.6337% ( 9) 00:08:31.184 12264.970 - 12317.610: 98.6757% ( 6) 00:08:31.184 12317.610 - 12370.249: 98.7248% ( 7) 00:08:31.184 12370.249 - 12422.888: 98.7808% ( 8) 00:08:31.184 12422.888 - 12475.528: 98.8299% ( 7) 00:08:31.184 12475.528 - 12528.167: 98.8579% ( 4) 00:08:31.184 12528.167 - 12580.806: 98.8789% ( 3) 00:08:31.184 12580.806 - 12633.446: 98.9070% ( 4) 00:08:31.184 12633.446 - 12686.085: 98.9350% ( 4) 00:08:31.184 12686.085 - 12738.724: 98.9560% ( 3) 00:08:31.184 12738.724 - 12791.364: 98.9840% ( 4) 00:08:31.184 12791.364 - 12844.003: 99.0121% ( 4) 00:08:31.184 12844.003 - 12896.643: 99.0401% ( 4) 00:08:31.184 12896.643 - 12949.282: 99.0611% ( 3) 00:08:31.184 12949.282 - 13001.921: 99.0891% ( 4) 00:08:31.184 13001.921 - 13054.561: 99.1031% ( 2) 00:08:31.184 39584.797 - 39795.354: 99.1522% ( 7) 00:08:31.184 39795.354 - 40005.912: 99.2082% ( 8) 00:08:31.184 40005.912 - 40216.469: 99.2643% ( 8) 00:08:31.184 40216.469 - 40427.027: 99.3203% ( 8) 00:08:31.184 40427.027 - 40637.584: 99.3764% ( 8) 00:08:31.184 40637.584 - 40848.141: 99.4254% ( 7) 00:08:31.184 40848.141 - 41058.699: 99.4815% ( 8) 00:08:31.184 41058.699 - 41269.256: 99.5305% ( 7) 00:08:31.184 41269.256 - 41479.814: 99.5516% ( 3) 00:08:31.184 45901.520 - 46112.077: 99.5936% ( 6) 00:08:31.184 46112.077 - 46322.635: 99.6357% ( 6) 00:08:31.184 46322.635 - 46533.192: 99.6847% ( 7) 00:08:31.184 46533.192 - 46743.749: 99.7337% ( 7) 00:08:31.184 46743.749 - 46954.307: 99.7898% ( 8) 00:08:31.184 46954.307 - 47164.864: 99.8459% ( 8) 00:08:31.184 47164.864 - 47375.422: 99.9019% ( 8) 00:08:31.184 47375.422 - 47585.979: 99.9510% ( 7) 00:08:31.184 47585.979 - 47796.537: 99.9930% ( 6) 00:08:31.184 47796.537 - 48007.094: 100.0000% ( 1) 00:08:31.184 00:08:31.184 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:31.184 ============================================================================== 00:08:31.184 Range in us Cumulative IO count 00:08:31.184 7948.543 - 8001.182: 0.0981% ( 14) 00:08:31.184 8001.182 - 8053.822: 0.5675% ( 67) 00:08:31.184 8053.822 - 8106.461: 1.9829% ( 202) 00:08:31.184 8106.461 - 8159.100: 4.5894% ( 372) 00:08:31.184 8159.100 - 8211.740: 8.0788% ( 498) 00:08:31.184 8211.740 - 8264.379: 12.2197% ( 591) 00:08:31.184 8264.379 - 8317.018: 16.7461% ( 646) 00:08:31.184 8317.018 - 8369.658: 21.6998% ( 707) 00:08:31.184 8369.658 - 8422.297: 26.8638% ( 737) 00:08:31.184 8422.297 - 8474.937: 32.1959% ( 761) 00:08:31.184 8474.937 - 8527.576: 37.5981% ( 771) 00:08:31.184 8527.576 - 8580.215: 43.3646% ( 823) 00:08:31.184 8580.215 - 8632.855: 49.0891% ( 817) 00:08:31.184 8632.855 - 8685.494: 54.9608% ( 838) 00:08:31.184 8685.494 - 8738.133: 60.9515% ( 855) 00:08:31.184 8738.133 - 8790.773: 66.9283% ( 853) 00:08:31.184 8790.773 - 8843.412: 72.8069% ( 839) 00:08:31.184 8843.412 - 8896.051: 78.2231% ( 773) 00:08:31.184 8896.051 - 8948.691: 82.8265% ( 657) 00:08:31.184 8948.691 - 9001.330: 86.2108% ( 483) 00:08:31.184 9001.330 - 9053.969: 88.7682% ( 365) 00:08:31.184 9053.969 - 9106.609: 90.7862% ( 288) 00:08:31.184 9106.609 - 9159.248: 92.3837% ( 228) 00:08:31.184 9159.248 - 9211.888: 93.5959% ( 173) 00:08:31.184 9211.888 - 9264.527: 94.6539% ( 151) 00:08:31.184 9264.527 - 9317.166: 95.4106% ( 108) 00:08:31.184 9317.166 - 9369.806: 95.9641% ( 79) 00:08:31.184 9369.806 - 9422.445: 
96.2584% ( 42) 00:08:31.184 9422.445 - 9475.084: 96.5036% ( 35) 00:08:31.184 9475.084 - 9527.724: 96.6648% ( 23) 00:08:31.184 9527.724 - 9580.363: 96.7419% ( 11) 00:08:31.184 9580.363 - 9633.002: 96.8189% ( 11) 00:08:31.184 9633.002 - 9685.642: 96.8820% ( 9) 00:08:31.184 9685.642 - 9738.281: 96.9381% ( 8) 00:08:31.184 9738.281 - 9790.920: 97.0081% ( 10) 00:08:31.184 9790.920 - 9843.560: 97.0782% ( 10) 00:08:31.184 9843.560 - 9896.199: 97.1483% ( 10) 00:08:31.184 9896.199 - 9948.839: 97.2043% ( 8) 00:08:31.184 9948.839 - 10001.478: 97.2744% ( 10) 00:08:31.184 10001.478 - 10054.117: 97.3234% ( 7) 00:08:31.184 10054.117 - 10106.757: 97.3585% ( 5) 00:08:31.184 10106.757 - 10159.396: 97.3935% ( 5) 00:08:31.184 10159.396 - 10212.035: 97.4355% ( 6) 00:08:31.184 10212.035 - 10264.675: 97.4776% ( 6) 00:08:31.184 10264.675 - 10317.314: 97.5196% ( 6) 00:08:31.185 10317.314 - 10369.953: 97.5617% ( 6) 00:08:31.185 10369.953 - 10422.593: 97.6037% ( 6) 00:08:31.185 10422.593 - 10475.232: 97.6387% ( 5) 00:08:31.185 10475.232 - 10527.871: 97.6808% ( 6) 00:08:31.185 10527.871 - 10580.511: 97.7158% ( 5) 00:08:31.185 10580.511 - 10633.150: 97.7859% ( 10) 00:08:31.185 10633.150 - 10685.790: 97.8419% ( 8) 00:08:31.185 10685.790 - 10738.429: 97.8840% ( 6) 00:08:31.185 10738.429 - 10791.068: 97.9260% ( 6) 00:08:31.185 10791.068 - 10843.708: 97.9821% ( 8) 00:08:31.185 10843.708 - 10896.347: 98.0311% ( 7) 00:08:31.185 10896.347 - 10948.986: 98.0872% ( 8) 00:08:31.185 10948.986 - 11001.626: 98.1222% ( 5) 00:08:31.185 11001.626 - 11054.265: 98.1642% ( 6) 00:08:31.185 11054.265 - 11106.904: 98.2133% ( 7) 00:08:31.185 11106.904 - 11159.544: 98.2553% ( 6) 00:08:31.185 11159.544 - 11212.183: 98.3114% ( 8) 00:08:31.185 11212.183 - 11264.822: 98.3674% ( 8) 00:08:31.185 11264.822 - 11317.462: 98.4235% ( 8) 00:08:31.185 11317.462 - 11370.101: 98.4725% ( 7) 00:08:31.185 11370.101 - 11422.741: 98.4936% ( 3) 00:08:31.185 11422.741 - 11475.380: 98.5076% ( 2) 00:08:31.185 11475.380 - 11528.019: 98.5216% ( 2) 00:08:31.185 11528.019 - 11580.659: 98.5356% ( 2) 00:08:31.185 11580.659 - 11633.298: 98.5426% ( 1) 00:08:31.185 11633.298 - 11685.937: 98.5916% ( 7) 00:08:31.185 11685.937 - 11738.577: 98.6267% ( 5) 00:08:31.185 11738.577 - 11791.216: 98.6757% ( 7) 00:08:31.185 11791.216 - 11843.855: 98.7108% ( 5) 00:08:31.185 11843.855 - 11896.495: 98.7598% ( 7) 00:08:31.185 11896.495 - 11949.134: 98.7948% ( 5) 00:08:31.185 11949.134 - 12001.773: 98.8229% ( 4) 00:08:31.185 12001.773 - 12054.413: 98.8509% ( 4) 00:08:31.185 12054.413 - 12107.052: 98.8719% ( 3) 00:08:31.185 12107.052 - 12159.692: 98.8999% ( 4) 00:08:31.185 12159.692 - 12212.331: 98.9210% ( 3) 00:08:31.185 12212.331 - 12264.970: 98.9420% ( 3) 00:08:31.185 12264.970 - 12317.610: 98.9700% ( 4) 00:08:31.185 12317.610 - 12370.249: 98.9980% ( 4) 00:08:31.185 12370.249 - 12422.888: 99.0191% ( 3) 00:08:31.185 12422.888 - 12475.528: 99.0471% ( 4) 00:08:31.185 12475.528 - 12528.167: 99.0751% ( 4) 00:08:31.185 12528.167 - 12580.806: 99.1031% ( 4) 00:08:31.185 37479.222 - 37689.780: 99.1242% ( 3) 00:08:31.185 37689.780 - 37900.337: 99.1802% ( 8) 00:08:31.185 37900.337 - 38110.895: 99.2293% ( 7) 00:08:31.185 38110.895 - 38321.452: 99.2783% ( 7) 00:08:31.185 38321.452 - 38532.010: 99.3274% ( 7) 00:08:31.185 38532.010 - 38742.567: 99.3834% ( 8) 00:08:31.185 38742.567 - 38953.124: 99.4325% ( 7) 00:08:31.185 38953.124 - 39163.682: 99.4885% ( 8) 00:08:31.185 39163.682 - 39374.239: 99.5376% ( 7) 00:08:31.185 39374.239 - 39584.797: 99.5516% ( 2) 00:08:31.185 44006.503 - 44217.060: 99.6076% ( 8) 
00:08:31.185 44217.060 - 44427.618: 99.6567% ( 7) 00:08:31.185 44427.618 - 44638.175: 99.7127% ( 8) 00:08:31.185 44638.175 - 44848.733: 99.7618% ( 7) 00:08:31.185 44848.733 - 45059.290: 99.8178% ( 8) 00:08:31.185 45059.290 - 45269.847: 99.8739% ( 8) 00:08:31.185 45269.847 - 45480.405: 99.9229% ( 7) 00:08:31.185 45480.405 - 45690.962: 99.9790% ( 8) 00:08:31.185 45690.962 - 45901.520: 100.0000% ( 3) 00:08:31.185 00:08:31.185 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:31.185 ============================================================================== 00:08:31.185 Range in us Cumulative IO count 00:08:31.185 7895.904 - 7948.543: 0.0210% ( 3) 00:08:31.185 7948.543 - 8001.182: 0.0841% ( 9) 00:08:31.185 8001.182 - 8053.822: 0.6376% ( 79) 00:08:31.185 8053.822 - 8106.461: 2.0390% ( 200) 00:08:31.185 8106.461 - 8159.100: 4.4002% ( 337) 00:08:31.185 8159.100 - 8211.740: 8.0157% ( 516) 00:08:31.185 8211.740 - 8264.379: 12.5350% ( 645) 00:08:31.185 8264.379 - 8317.018: 16.9212% ( 626) 00:08:31.185 8317.018 - 8369.658: 21.8890% ( 709) 00:08:31.185 8369.658 - 8422.297: 27.0179% ( 732) 00:08:31.185 8422.297 - 8474.937: 32.2590% ( 748) 00:08:31.185 8474.937 - 8527.576: 37.7803% ( 788) 00:08:31.185 8527.576 - 8580.215: 43.5538% ( 824) 00:08:31.185 8580.215 - 8632.855: 49.3624% ( 829) 00:08:31.185 8632.855 - 8685.494: 55.1850% ( 831) 00:08:31.185 8685.494 - 8738.133: 61.1477% ( 851) 00:08:31.185 8738.133 - 8790.773: 67.0964% ( 849) 00:08:31.185 8790.773 - 8843.412: 72.8840% ( 826) 00:08:31.185 8843.412 - 8896.051: 78.2441% ( 765) 00:08:31.185 8896.051 - 8948.691: 82.8265% ( 654) 00:08:31.185 8948.691 - 9001.330: 86.4140% ( 512) 00:08:31.185 9001.330 - 9053.969: 88.8803% ( 352) 00:08:31.185 9053.969 - 9106.609: 90.7371% ( 265) 00:08:31.185 9106.609 - 9159.248: 92.2646% ( 218) 00:08:31.185 9159.248 - 9211.888: 93.4908% ( 175) 00:08:31.185 9211.888 - 9264.527: 94.5207% ( 147) 00:08:31.185 9264.527 - 9317.166: 95.3055% ( 112) 00:08:31.185 9317.166 - 9369.806: 95.8871% ( 83) 00:08:31.185 9369.806 - 9422.445: 96.2234% ( 48) 00:08:31.185 9422.445 - 9475.084: 96.4686% ( 35) 00:08:31.185 9475.084 - 9527.724: 96.5947% ( 18) 00:08:31.185 9527.724 - 9580.363: 96.7138% ( 17) 00:08:31.185 9580.363 - 9633.002: 96.7769% ( 9) 00:08:31.185 9633.002 - 9685.642: 96.8610% ( 12) 00:08:31.185 9685.642 - 9738.281: 96.9311% ( 10) 00:08:31.185 9738.281 - 9790.920: 97.0011% ( 10) 00:08:31.185 9790.920 - 9843.560: 97.0642% ( 9) 00:08:31.185 9843.560 - 9896.199: 97.1413% ( 11) 00:08:31.185 9896.199 - 9948.839: 97.2043% ( 9) 00:08:31.185 9948.839 - 10001.478: 97.2534% ( 7) 00:08:31.185 10001.478 - 10054.117: 97.2954% ( 6) 00:08:31.185 10054.117 - 10106.757: 97.3374% ( 6) 00:08:31.185 10106.757 - 10159.396: 97.3795% ( 6) 00:08:31.185 10159.396 - 10212.035: 97.4215% ( 6) 00:08:31.185 10212.035 - 10264.675: 97.4706% ( 7) 00:08:31.185 10264.675 - 10317.314: 97.5126% ( 6) 00:08:31.185 10317.314 - 10369.953: 97.5476% ( 5) 00:08:31.185 10369.953 - 10422.593: 97.5897% ( 6) 00:08:31.185 10422.593 - 10475.232: 97.6107% ( 3) 00:08:31.185 10475.232 - 10527.871: 97.6317% ( 3) 00:08:31.185 10527.871 - 10580.511: 97.6457% ( 2) 00:08:31.185 10580.511 - 10633.150: 97.6668% ( 3) 00:08:31.185 10633.150 - 10685.790: 97.6808% ( 2) 00:08:31.185 10685.790 - 10738.429: 97.7018% ( 3) 00:08:31.185 10738.429 - 10791.068: 97.7228% ( 3) 00:08:31.185 10791.068 - 10843.708: 97.7368% ( 2) 00:08:31.185 10843.708 - 10896.347: 97.7649% ( 4) 00:08:31.185 10896.347 - 10948.986: 97.8419% ( 11) 00:08:31.185 10948.986 - 11001.626: 97.8910% ( 7) 
00:08:31.185 11001.626 - 11054.265: 97.9330% ( 6) 00:08:31.185 11054.265 - 11106.904: 97.9751% ( 6) 00:08:31.185 11106.904 - 11159.544: 98.0241% ( 7) 00:08:31.185 11159.544 - 11212.183: 98.0661% ( 6) 00:08:31.185 11212.183 - 11264.822: 98.1152% ( 7) 00:08:31.185 11264.822 - 11317.462: 98.1572% ( 6) 00:08:31.185 11317.462 - 11370.101: 98.2133% ( 8) 00:08:31.185 11370.101 - 11422.741: 98.2623% ( 7) 00:08:31.185 11422.741 - 11475.380: 98.2974% ( 5) 00:08:31.185 11475.380 - 11528.019: 98.3464% ( 7) 00:08:31.185 11528.019 - 11580.659: 98.3814% ( 5) 00:08:31.185 11580.659 - 11633.298: 98.4165% ( 5) 00:08:31.185 11633.298 - 11685.937: 98.4655% ( 7) 00:08:31.185 11685.937 - 11738.577: 98.4795% ( 2) 00:08:31.185 11738.577 - 11791.216: 98.5006% ( 3) 00:08:31.185 11791.216 - 11843.855: 98.5216% ( 3) 00:08:31.185 11843.855 - 11896.495: 98.6337% ( 16) 00:08:31.185 11896.495 - 11949.134: 98.6757% ( 6) 00:08:31.185 11949.134 - 12001.773: 98.7108% ( 5) 00:08:31.185 12001.773 - 12054.413: 98.7388% ( 4) 00:08:31.185 12054.413 - 12107.052: 98.7738% ( 5) 00:08:31.185 12107.052 - 12159.692: 98.8229% ( 7) 00:08:31.185 12159.692 - 12212.331: 98.8649% ( 6) 00:08:31.185 12212.331 - 12264.970: 98.9070% ( 6) 00:08:31.185 12264.970 - 12317.610: 98.9350% ( 4) 00:08:31.185 12317.610 - 12370.249: 98.9560% ( 3) 00:08:31.185 12370.249 - 12422.888: 98.9840% ( 4) 00:08:31.185 12422.888 - 12475.528: 99.0121% ( 4) 00:08:31.185 12475.528 - 12528.167: 99.0401% ( 4) 00:08:31.185 12528.167 - 12580.806: 99.0611% ( 3) 00:08:31.185 12580.806 - 12633.446: 99.0891% ( 4) 00:08:31.185 12633.446 - 12686.085: 99.1031% ( 2) 00:08:31.185 35584.206 - 35794.763: 99.1382% ( 5) 00:08:31.185 35794.763 - 36005.320: 99.1872% ( 7) 00:08:31.185 36005.320 - 36215.878: 99.2433% ( 8) 00:08:31.185 36215.878 - 36426.435: 99.2993% ( 8) 00:08:31.185 36426.435 - 36636.993: 99.3484% ( 7) 00:08:31.185 36636.993 - 36847.550: 99.3974% ( 7) 00:08:31.185 36847.550 - 37058.108: 99.4535% ( 8) 00:08:31.185 37058.108 - 37268.665: 99.5095% ( 8) 00:08:31.185 37268.665 - 37479.222: 99.5516% ( 6) 00:08:31.185 41900.929 - 42111.486: 99.5726% ( 3) 00:08:31.185 42111.486 - 42322.043: 99.6216% ( 7) 00:08:31.185 42322.043 - 42532.601: 99.6777% ( 8) 00:08:31.185 42532.601 - 42743.158: 99.7337% ( 8) 00:08:31.185 42743.158 - 42953.716: 99.7828% ( 7) 00:08:31.185 42953.716 - 43164.273: 99.8388% ( 8) 00:08:31.185 43164.273 - 43374.831: 99.8949% ( 8) 00:08:31.185 43374.831 - 43585.388: 99.9510% ( 8) 00:08:31.185 43585.388 - 43795.945: 100.0000% ( 7) 00:08:31.185 00:08:31.185 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:31.185 ============================================================================== 00:08:31.185 Range in us Cumulative IO count 00:08:31.185 7948.543 - 8001.182: 0.0488% ( 7) 00:08:31.185 8001.182 - 8053.822: 0.6278% ( 83) 00:08:31.185 8053.822 - 8106.461: 2.1484% ( 218) 00:08:31.185 8106.461 - 8159.100: 4.3945% ( 322) 00:08:31.185 8159.100 - 8211.740: 8.1543% ( 539) 00:08:31.185 8211.740 - 8264.379: 12.4442% ( 615) 00:08:31.185 8264.379 - 8317.018: 16.9992% ( 653) 00:08:31.185 8317.018 - 8369.658: 21.8680% ( 698) 00:08:31.185 8369.658 - 8422.297: 26.9322% ( 726) 00:08:31.185 8422.297 - 8474.937: 32.0592% ( 735) 00:08:31.185 8474.937 - 8527.576: 37.3117% ( 753) 00:08:31.185 8527.576 - 8580.215: 43.1571% ( 838) 00:08:31.185 8580.215 - 8632.855: 49.0165% ( 840) 00:08:31.185 8632.855 - 8685.494: 54.9177% ( 846) 00:08:31.185 8685.494 - 8738.133: 60.8538% ( 851) 00:08:31.185 8738.133 - 8790.773: 66.7690% ( 848) 00:08:31.185 8790.773 - 8843.412: 
00:08:31.186 [latency histogram condensed: per-bucket cumulative IO counts climb from 72.6004% ( 836) at 8790.773 - 8843.412us to 99.1071% by 12738.724us, with a sparse outlier tail between 28846.368us and 37058.108us ending at 100.0000% ( 2); individual bucket rows omitted]
00:08:31.186 
00:08:31.186 13:49:23 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:32.608 Initializing NVMe Controllers
00:08:32.608 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:32.608 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:32.608 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:32.608 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:32.608 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:32.608 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:32.608 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:32.608 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:32.608 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:32.608 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:32.608 Initialization complete. Launching workers.
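The `spdk_nvme_perf` invocation above drives all attached namespaces from a single core. The sketch below restates it with my reading of each flag; these readings are a hedged interpretation and should be confirmed against the binary's own `--help` output rather than taken as authoritative documentation.

```bash
#!/usr/bin/env bash
# Hedged sketch of the perf run launched above; flag readings are my
# interpretation and should be verified with `spdk_nvme_perf --help`.
#   -q 128    outstanding I/O (queue depth) per namespace
#   -w write  I/O pattern for the workload
#   -o 12288  I/O size in bytes (12 KiB)
#   -t 1      run time in seconds
#   -LL       latency tracking; given twice, which appears to enable the
#             per-bucket histograms printed below
#   -i 0      shared-memory group ID, shared with the other tests in this job
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 -w write -o 12288 -t 1 -LL -i 0
```

With `-t 1` the run is short: the summary table and histograms that follow come from a single one-second write pass.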
00:08:32.608 ========================================================
00:08:32.608                                                        Latency(us)
00:08:32.608 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:32.608 PCIE (0000:00:10.0) NSID 1 from core 0:   13136.63     153.94    9768.03    7948.81   40290.14
00:08:32.608 PCIE (0000:00:11.0) NSID 1 from core 0:   13136.63     153.94    9753.40    7815.52   38232.04
00:08:32.608 PCIE (0000:00:13.0) NSID 1 from core 0:   13136.63     153.94    9738.60    8003.83   37019.99
00:08:32.608 PCIE (0000:00:12.0) NSID 1 from core 0:   13136.63     153.94    9724.29    7959.16   35643.23
00:08:32.608 PCIE (0000:00:12.0) NSID 2 from core 0:   13136.63     153.94    9710.40    7987.78   34079.78
00:08:32.608 PCIE (0000:00:12.0) NSID 3 from core 0:   13200.40     154.69    9650.00    8048.04   26547.33
00:08:32.608 ========================================================
00:08:32.608 Total                                  :   78883.55     924.42    9724.06    7815.52   40290.14
00:08:32.608 
00:08:32.608 Summary latency data from core 0, in us (per-device percentile lists, consolidated):
00:08:32.608 Percentile      10.0/ns1     11.0/ns1     13.0/ns1     12.0/ns1     12.0/ns2     12.0/ns3
00:08:32.608   1.00000%      8369.658     8369.658     8422.297     8474.937     8422.297     8369.658
00:08:32.608  10.00000%      8790.773     8843.412     8843.412     8843.412     8843.412     8843.412
00:08:32.608  25.00000%      9053.969     9053.969     9053.969     9053.969     9053.969     9106.609
00:08:32.608  50.00000%      9369.806     9369.806     9369.806     9369.806     9369.806     9369.806
00:08:32.608  75.00000%      9685.642     9633.002     9685.642     9685.642     9685.642     9633.002
00:08:32.608  90.00000%     10264.675    10212.035    10212.035    10212.035    10264.675    10317.314
00:08:32.608  95.00000%     11843.855    12001.773    12001.773    12264.970    12317.610    12159.692
00:08:32.608  98.00000%     14317.905    14317.905    14002.069    13896.790    14423.184    14423.184
00:08:32.608  99.00000%     15581.250    16107.643    15897.086    16212.922    16002.365    15370.692
00:08:32.608  99.50000%     32425.844    30741.385    30109.712    28425.253    26951.351    18844.890
00:08:32.608  99.90000%     40005.912    37900.337    36847.550    35373.648    33899.746    26214.400
00:08:32.608  99.99000%     40216.469    38321.452    37058.108    35794.763    34110.304    26530.236
00:08:32.608  99.99900%     40427.027    38321.452    37058.108    35794.763    34110.304    26635.515
00:08:32.608  [99.99990% and 99.99999% match the 99.99900% values for every namespace]
00:08:32.608 
00:08:32.608 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:32.609 [Range in us / Cumulative IO count buckets condensed: from 7948.543 - 8001.182: 0.0076% ( 1) up to 40216.469 - 40427.027: 100.0000% ( 1); the percentile table above summarizes the same distribution]
00:08:32.609 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:32.610 [buckets condensed: from 7790.625 - 7843.264: 0.0076% ( 1) up to 38110.895 - 38321.452: 100.0000% ( 5)]
00:08:32.610 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:32.611 [buckets condensed: from 8001.182 - 8053.822: 0.0228% ( 3) up to 36847.550 - 37058.108: 100.0000% ( 7)]
00:08:32.611 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:32.611 [buckets condensed: from 7948.543 - 8001.182: 0.0076% ( 1) up to 35584.206 - 35794.763: 100.0000% ( 3)]
00:08:32.612 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:32.612 [buckets condensed: from 7948.543 - 8001.182: 0.0076% ( 1) up to 33899.746 - 34110.304: 100.0000% ( 7)]
00:08:32.613 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:32.613 [buckets condensed: from 8001.182 - 8053.822: 0.0075% ( 1) up to 26530.236 - 26635.515: 100.0000% ( 1)]
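Each per-device row in the summary table above carries IOPS and MiB/s in fixed trailing columns, so aggregate throughput can be recomputed straight from a saved console log. The helper below is hypothetical (it is not part of the SPDK tree) and assumes the exact row layout shown above, including the "from core 0:" suffix on data rows.

```bash
#!/usr/bin/env bash
# Hypothetical log-scraping helper, assuming the "PCIE (BDF) NSID n from
# core 0: IOPS MiB/s Average min max" row layout printed above.
LOG=${1:-console.log}

awk '/PCIE \(/ && $NF + 0 > 0 {      # data rows end in a numeric max latency
         iops += $(NF-4)             # 5th-from-last field: IOPS
         mibs += $(NF-3)             # 4th-from-last field: MiB/s
         n++
     }
     END { printf "%d namespaces, %.2f IOPS, %.2f MiB/s aggregate\n", n, iops, mibs }' "$LOG"
```

The `$NF + 0 > 0` guard skips the "Associating PCIE ..." and histogram-header lines, which also mention PCIE but end in non-numeric text. Against this run the sums agree with the logged Total row (78883.55 IOPS across 6 namespaces), up to rounding of the per-device MiB/s values.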
00:08:32.613 13:49:25 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:08:32.613 
00:08:32.613 real    0m2.688s
00:08:32.613 user    0m2.287s
00:08:32.613 sys     0m0.296s
00:08:32.613 13:49:25 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:32.613 13:49:25 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:08:32.613 ************************************
00:08:32.613 END TEST nvme_perf
00:08:32.613 ************************************
00:08:32.613 13:49:25 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:32.613 13:49:25 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:32.613 13:49:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:32.613 13:49:25 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:32.613 ************************************
00:08:32.613 START TEST nvme_hello_world
00:08:32.613 ************************************
00:08:32.613 13:49:25 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:32.613 Initializing NVMe Controllers
00:08:32.613 Attached to 0000:00:10.0
00:08:32.613   Namespace ID: 1 size: 6GB
00:08:32.613 Attached to 0000:00:11.0
00:08:32.613   Namespace ID: 1 size: 5GB
00:08:32.613 Attached to 0000:00:13.0
00:08:32.613   Namespace ID: 1 size: 1GB
00:08:32.613 Attached to 0000:00:12.0
00:08:32.613   Namespace ID: 1 size: 4GB
00:08:32.613   Namespace ID: 2 size: 4GB
00:08:32.613   Namespace ID: 3 size: 4GB
00:08:32.613 Initialization complete.
00:08:32.613 INFO: using host memory buffer for IO
00:08:32.613 Hello world!
00:08:32.613 INFO: using host memory buffer for IO
00:08:32.613 Hello world!
00:08:32.613 INFO: using host memory buffer for IO
00:08:32.613 Hello world!
00:08:32.613 INFO: using host memory buffer for IO
00:08:32.613 Hello world!
00:08:32.613 INFO: using host memory buffer for IO
00:08:32.613 Hello world!
00:08:32.613 INFO: using host memory buffer for IO
00:08:32.613 Hello world!
00:08:32.873 ************************************
00:08:32.873 END TEST nvme_hello_world
00:08:32.873 ************************************
00:08:32.873 
00:08:32.873 real    0m0.304s
00:08:32.873 user    0m0.117s
00:08:32.873 sys     0m0.146s
00:08:32.873 13:49:25 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:32.873 13:49:25 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
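The START TEST / END TEST banners and the real/user/sys blocks wrapped around each test come from the harness's `run_test` helper in autotest_common.sh. The following is a simplified sketch of that pattern only; it is not the real helper, which also handles xtrace toggling and additional bookkeeping.

```bash
#!/usr/bin/env bash
# Simplified sketch of the banner-and-timing wrapper whose output is
# visible throughout this log; not the actual run_test from
# autotest_common.sh.
run_test_sketch() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"            # emits the real/user/sys block seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

# Usage, mirroring the invocation logged above:
# run_test_sketch nvme_hello_world \
#     /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
```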
00:08:32.873 ************************************ 00:08:32.873 END TEST nvme_hello_world 00:08:32.873 ************************************ 00:08:32.873 00:08:32.873 real 0m0.304s 00:08:32.873 user 0m0.117s 00:08:32.873 sys 0m0.146s 00:08:32.873 13:49:25 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.873 13:49:25 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:32.873 13:49:25 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:32.873 13:49:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.873 13:49:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.873 13:49:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:32.873 ************************************ 00:08:32.873 START TEST nvme_sgl 00:08:32.873 ************************************ 00:08:32.873 13:49:25 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:33.132 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:33.132 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:33.132 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:33.132 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:33.132 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:33.132 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:33.133 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:33.133 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:33.133 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:33.133 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:33.133 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:33.133 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:33.133 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:33.133 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:33.133 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:33.133 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:33.133 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:33.133 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:33.133 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:33.133 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:33.133 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:33.133 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:33.133 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:33.133 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:33.133 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:33.133 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:33.133 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:33.133 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:33.133 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:33.133 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:33.133 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:33.133 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:33.133 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:33.133 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:08:33.133 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:33.133 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:33.133 NVMe Readv/Writev Request test 00:08:33.133 Attached to 0000:00:10.0 00:08:33.133 Attached to 0000:00:11.0 00:08:33.133 Attached to 0000:00:13.0 00:08:33.133 Attached to 0000:00:12.0 00:08:33.133 0000:00:10.0: build_io_request_2 test passed 00:08:33.133 0000:00:10.0: build_io_request_4 test passed 00:08:33.133 0000:00:10.0: build_io_request_5 test passed 00:08:33.133 0000:00:10.0: build_io_request_6 test passed 00:08:33.133 0000:00:10.0: build_io_request_7 test passed 00:08:33.133 0000:00:10.0: build_io_request_10 test passed 00:08:33.133 0000:00:11.0: build_io_request_2 test passed 00:08:33.133 0000:00:11.0: build_io_request_4 test passed 00:08:33.133 0000:00:11.0: build_io_request_5 test passed 00:08:33.133 0000:00:11.0: build_io_request_6 test passed 00:08:33.133 0000:00:11.0: build_io_request_7 test passed 00:08:33.133 0000:00:11.0: build_io_request_10 test passed 00:08:33.133 Cleaning up... 00:08:33.133 ************************************ 00:08:33.133 END TEST nvme_sgl 00:08:33.133 ************************************ 00:08:33.133 00:08:33.133 real 0m0.361s 00:08:33.133 user 0m0.175s 00:08:33.133 sys 0m0.143s 00:08:33.133 13:49:26 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.133 13:49:26 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:33.133 13:49:26 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:33.133 13:49:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.133 13:49:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.133 13:49:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.133 ************************************ 00:08:33.133 START TEST nvme_e2edp 00:08:33.133 ************************************ 00:08:33.133 13:49:26 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:33.702 NVMe Write/Read with End-to-End data protection test 00:08:33.702 Attached to 0000:00:10.0 00:08:33.702 Attached to 0000:00:11.0 00:08:33.702 Attached to 0000:00:13.0 00:08:33.702 Attached to 0000:00:12.0 00:08:33.702 Cleaning up... 
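In the sgl run above, every "Invalid IO length parameter" line is an expected rejection and every "test passed" line is an accepted request. A quick tally over a saved copy of this console log (autorun.log is a hypothetical filename):

  # Expected rejections vs. passing build_io_request cases.
  grep -c 'Invalid IO length parameter' autorun.log    # 36 in the run above
  grep -c 'build_io_request.*test passed' autorun.log  # 12 in the run above

The split makes sense: only 0000:00:10.0 and 0000:00:11.0 accept any of the twelve request shapes (six each), while 0000:00:13.0 and 0000:00:12.0 reject all twelve.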
00:08:33.702 00:08:33.702 real 0m0.294s 00:08:33.702 user 0m0.107s 00:08:33.702 sys 0m0.143s 00:08:33.702 ************************************ 00:08:33.702 END TEST nvme_e2edp 00:08:33.702 ************************************ 00:08:33.702 13:49:26 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.702 13:49:26 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:33.702 13:49:26 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:33.702 13:49:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.702 13:49:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.702 13:49:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.702 ************************************ 00:08:33.702 START TEST nvme_reserve 00:08:33.702 ************************************ 00:08:33.702 13:49:26 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:33.961 ===================================================== 00:08:33.961 NVMe Controller at PCI bus 0, device 16, function 0 00:08:33.961 ===================================================== 00:08:33.961 Reservations: Not Supported 00:08:33.961 ===================================================== 00:08:33.961 NVMe Controller at PCI bus 0, device 17, function 0 00:08:33.961 ===================================================== 00:08:33.961 Reservations: Not Supported 00:08:33.961 ===================================================== 00:08:33.961 NVMe Controller at PCI bus 0, device 19, function 0 00:08:33.961 ===================================================== 00:08:33.961 Reservations: Not Supported 00:08:33.961 ===================================================== 00:08:33.961 NVMe Controller at PCI bus 0, device 18, function 0 00:08:33.961 ===================================================== 00:08:33.961 Reservations: Not Supported 00:08:33.961 Reservation test passed 00:08:33.961 00:08:33.961 real 0m0.310s 00:08:33.961 user 0m0.107s 00:08:33.961 sys 0m0.148s 00:08:33.961 ************************************ 00:08:33.961 END TEST nvme_reserve 00:08:33.961 ************************************ 00:08:33.961 13:49:26 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.961 13:49:26 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:33.961 13:49:26 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:33.961 13:49:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.961 13:49:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.961 13:49:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.961 ************************************ 00:08:33.961 START TEST nvme_err_injection 00:08:33.961 ************************************ 00:08:33.961 13:49:26 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:34.221 NVMe Error Injection test 00:08:34.221 Attached to 0000:00:10.0 00:08:34.221 Attached to 0000:00:11.0 00:08:34.221 Attached to 0000:00:13.0 00:08:34.221 Attached to 0000:00:12.0 00:08:34.221 0000:00:12.0: get features failed as expected 00:08:34.221 0000:00:10.0: get features failed as expected 00:08:34.221 0000:00:11.0: get features failed as expected 00:08:34.221 0000:00:13.0: get features failed as expected 00:08:34.221 
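The reserve output above names controllers as "PCI bus 0, device 16, function 0" and so on, while the rest of the log uses hex BDF notation. The two line up once the device number is printed in hex, which is easy to check:

  # Decimal bus/device/function -> BDF: device 16 is 0x10, device 19 is 0x13.
  printf '0000:%02x:%02x.%x\n' 0 16 0   # -> 0000:00:10.0
  printf '0000:%02x:%02x.%x\n' 0 19 0   # -> 0000:00:13.0

So devices 16/17/19/18 above are exactly the 10.0/11.0/13.0/12.0 controllers attached everywhere else in this run.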
0000:00:10.0: get features successfully as expected 00:08:34.221 0000:00:11.0: get features successfully as expected 00:08:34.221 0000:00:13.0: get features successfully as expected 00:08:34.221 0000:00:12.0: get features successfully as expected 00:08:34.221 0000:00:10.0: read failed as expected 00:08:34.221 0000:00:11.0: read failed as expected 00:08:34.221 0000:00:13.0: read failed as expected 00:08:34.221 0000:00:12.0: read failed as expected 00:08:34.221 0000:00:10.0: read successfully as expected 00:08:34.221 0000:00:11.0: read successfully as expected 00:08:34.221 0000:00:13.0: read successfully as expected 00:08:34.221 0000:00:12.0: read successfully as expected 00:08:34.221 Cleaning up... 00:08:34.221 00:08:34.221 real 0m0.301s 00:08:34.221 user 0m0.115s 00:08:34.221 sys 0m0.144s 00:08:34.221 ************************************ 00:08:34.221 END TEST nvme_err_injection 00:08:34.221 ************************************ 00:08:34.221 13:49:27 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.221 13:49:27 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:34.480 13:49:27 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:34.480 13:49:27 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:34.480 13:49:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.480 13:49:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:34.480 ************************************ 00:08:34.480 START TEST nvme_overhead 00:08:34.480 ************************************ 00:08:34.480 13:49:27 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:35.860 Initializing NVMe Controllers 00:08:35.860 Attached to 0000:00:10.0 00:08:35.860 Attached to 0000:00:11.0 00:08:35.860 Attached to 0000:00:13.0 00:08:35.860 Attached to 0000:00:12.0 00:08:35.860 Initialization complete. Launching workers. 
00:08:35.860 submit (in ns) avg, min, max = 13566.7, 10751.0, 49489.2 00:08:35.860 complete (in ns) avg, min, max = 8139.2, 7729.3, 103996.0 00:08:35.860 00:08:35.860 Submit histogram 00:08:35.860 ================ 00:08:35.860 Range in us Cumulative Count 00:08:35.860 10.744 - 10.795: 0.0338% ( 2) 00:08:35.860 10.795 - 10.847: 0.0507% ( 1) 00:08:35.860 11.001 - 11.052: 0.0676% ( 1) 00:08:35.860 11.206 - 11.258: 0.1014% ( 2) 00:08:35.860 11.258 - 11.309: 0.1183% ( 1) 00:08:35.860 11.361 - 11.412: 0.1352% ( 1) 00:08:35.860 11.412 - 11.463: 0.1522% ( 1) 00:08:35.860 11.926 - 11.978: 0.1691% ( 1) 00:08:35.860 12.337 - 12.389: 0.1860% ( 1) 00:08:35.860 12.749 - 12.800: 0.2367% ( 3) 00:08:35.860 12.800 - 12.851: 0.7439% ( 30) 00:08:35.860 12.851 - 12.903: 2.5697% ( 108) 00:08:35.860 12.903 - 12.954: 6.1708% ( 213) 00:08:35.860 12.954 - 13.006: 11.6653% ( 325) 00:08:35.860 13.006 - 13.057: 17.6839% ( 356) 00:08:35.860 13.057 - 13.108: 25.5114% ( 463) 00:08:35.860 13.108 - 13.160: 33.5757% ( 477) 00:08:35.860 13.160 - 13.263: 48.5545% ( 886) 00:08:35.860 13.263 - 13.365: 63.4658% ( 882) 00:08:35.860 13.365 - 13.468: 74.0321% ( 625) 00:08:35.860 13.468 - 13.571: 82.1302% ( 479) 00:08:35.860 13.571 - 13.674: 87.7092% ( 330) 00:08:35.860 13.674 - 13.777: 91.1412% ( 203) 00:08:35.860 13.777 - 13.880: 93.2713% ( 126) 00:08:35.860 13.880 - 13.982: 94.0828% ( 48) 00:08:35.860 13.982 - 14.085: 94.5731% ( 29) 00:08:35.860 14.085 - 14.188: 94.7591% ( 11) 00:08:35.860 14.188 - 14.291: 94.9620% ( 12) 00:08:35.860 14.291 - 14.394: 94.9789% ( 1) 00:08:35.860 14.394 - 14.496: 95.0127% ( 2) 00:08:35.860 14.496 - 14.599: 95.0465% ( 2) 00:08:35.860 15.010 - 15.113: 95.0634% ( 1) 00:08:35.860 15.216 - 15.319: 95.0803% ( 1) 00:08:35.860 15.524 - 15.627: 95.1310% ( 3) 00:08:35.860 15.833 - 15.936: 95.1479% ( 1) 00:08:35.860 15.936 - 16.039: 95.1648% ( 1) 00:08:35.860 16.039 - 16.141: 95.1986% ( 2) 00:08:35.860 16.141 - 16.244: 95.2325% ( 2) 00:08:35.860 16.347 - 16.450: 95.2494% ( 1) 00:08:35.860 16.758 - 16.861: 95.3339% ( 5) 00:08:35.860 16.861 - 16.964: 95.4184% ( 5) 00:08:35.860 16.964 - 17.067: 95.5537% ( 8) 00:08:35.860 17.067 - 17.169: 95.8242% ( 16) 00:08:35.860 17.169 - 17.272: 96.0440% ( 13) 00:08:35.860 17.272 - 17.375: 96.2975% ( 15) 00:08:35.860 17.375 - 17.478: 96.5850% ( 17) 00:08:35.860 17.478 - 17.581: 96.9400% ( 21) 00:08:35.860 17.581 - 17.684: 97.2274% ( 17) 00:08:35.860 17.684 - 17.786: 97.4303% ( 12) 00:08:35.860 17.786 - 17.889: 97.4979% ( 4) 00:08:35.860 17.889 - 17.992: 97.5655% ( 4) 00:08:35.860 17.992 - 18.095: 97.6500% ( 5) 00:08:35.860 18.095 - 18.198: 97.7008% ( 3) 00:08:35.860 18.198 - 18.300: 97.7515% ( 3) 00:08:35.860 18.300 - 18.403: 97.8191% ( 4) 00:08:35.860 18.403 - 18.506: 97.9036% ( 5) 00:08:35.860 18.506 - 18.609: 98.0051% ( 6) 00:08:35.860 18.609 - 18.712: 98.1741% ( 10) 00:08:35.860 18.712 - 18.814: 98.2249% ( 3) 00:08:35.860 18.814 - 18.917: 98.4108% ( 11) 00:08:35.860 18.917 - 19.020: 98.5123% ( 6) 00:08:35.860 19.020 - 19.123: 98.6306% ( 7) 00:08:35.860 19.123 - 19.226: 98.7320% ( 6) 00:08:35.860 19.226 - 19.329: 98.8166% ( 5) 00:08:35.860 19.329 - 19.431: 98.9349% ( 7) 00:08:35.860 19.431 - 19.534: 99.0363% ( 6) 00:08:35.860 19.534 - 19.637: 99.1209% ( 5) 00:08:35.860 19.637 - 19.740: 99.2730% ( 9) 00:08:35.860 19.740 - 19.843: 99.3407% ( 4) 00:08:35.860 19.843 - 19.945: 99.3745% ( 2) 00:08:35.861 19.945 - 20.048: 99.3914% ( 1) 00:08:35.861 20.048 - 20.151: 99.4928% ( 6) 00:08:35.861 20.151 - 20.254: 99.5097% ( 1) 00:08:35.861 20.254 - 20.357: 99.5435% ( 2) 00:08:35.861 
20.357 - 20.459: 99.5604% ( 1) 00:08:35.861 20.459 - 20.562: 99.5943% ( 2) 00:08:35.861 20.973 - 21.076: 99.6281% ( 2) 00:08:35.861 21.076 - 21.179: 99.6788% ( 3) 00:08:35.861 21.693 - 21.796: 99.6957% ( 1) 00:08:35.861 22.618 - 22.721: 99.7126% ( 1) 00:08:35.861 23.133 - 23.235: 99.7295% ( 1) 00:08:35.861 23.338 - 23.441: 99.7464% ( 1) 00:08:35.861 23.647 - 23.749: 99.7802% ( 2) 00:08:35.861 24.161 - 24.263: 99.7971% ( 1) 00:08:35.861 25.086 - 25.189: 99.8140% ( 1) 00:08:35.861 25.806 - 25.908: 99.8309% ( 1) 00:08:35.861 27.348 - 27.553: 99.8478% ( 1) 00:08:35.861 29.610 - 29.815: 99.8648% ( 1) 00:08:35.861 32.077 - 32.283: 99.8817% ( 1) 00:08:35.861 36.395 - 36.601: 99.8986% ( 1) 00:08:35.861 40.919 - 41.124: 99.9324% ( 2) 00:08:35.861 41.536 - 41.741: 99.9493% ( 1) 00:08:35.861 44.209 - 44.414: 99.9662% ( 1) 00:08:35.861 48.527 - 48.733: 99.9831% ( 1) 00:08:35.861 49.349 - 49.555: 100.0000% ( 1) 00:08:35.861 00:08:35.861 Complete histogram 00:08:35.861 ================== 00:08:35.861 Range in us Cumulative Count 00:08:35.861 7.711 - 7.762: 0.0338% ( 2) 00:08:35.861 7.762 - 7.814: 2.2147% ( 129) 00:08:35.861 7.814 - 7.865: 14.4548% ( 724) 00:08:35.861 7.865 - 7.916: 35.4522% ( 1242) 00:08:35.861 7.916 - 7.968: 55.6213% ( 1193) 00:08:35.861 7.968 - 8.019: 70.7016% ( 892) 00:08:35.861 8.019 - 8.071: 80.7946% ( 597) 00:08:35.861 8.071 - 8.122: 87.3880% ( 390) 00:08:35.861 8.122 - 8.173: 91.6822% ( 254) 00:08:35.861 8.173 - 8.225: 94.1843% ( 148) 00:08:35.861 8.225 - 8.276: 95.3677% ( 70) 00:08:35.861 8.276 - 8.328: 95.9425% ( 34) 00:08:35.861 8.328 - 8.379: 96.2806% ( 20) 00:08:35.861 8.379 - 8.431: 96.4328% ( 9) 00:08:35.861 8.431 - 8.482: 96.5680% ( 8) 00:08:35.861 8.482 - 8.533: 96.7371% ( 10) 00:08:35.861 8.533 - 8.585: 97.1598% ( 25) 00:08:35.861 8.585 - 8.636: 97.5317% ( 22) 00:08:35.861 8.636 - 8.688: 97.7177% ( 11) 00:08:35.861 8.688 - 8.739: 97.9544% ( 14) 00:08:35.861 8.739 - 8.790: 97.9882% ( 2) 00:08:35.861 8.790 - 8.842: 98.0896% ( 6) 00:08:35.861 8.842 - 8.893: 98.1234% ( 2) 00:08:35.861 8.893 - 8.945: 98.1741% ( 3) 00:08:35.861 8.945 - 8.996: 98.1910% ( 1) 00:08:35.861 9.150 - 9.202: 98.2079% ( 1) 00:08:35.861 9.202 - 9.253: 98.2249% ( 1) 00:08:35.861 10.178 - 10.230: 98.2418% ( 1) 00:08:35.861 10.435 - 10.487: 98.2587% ( 1) 00:08:35.861 11.206 - 11.258: 98.2756% ( 1) 00:08:35.861 11.515 - 11.566: 98.2925% ( 1) 00:08:35.861 11.618 - 11.669: 98.3094% ( 1) 00:08:35.861 12.080 - 12.132: 98.3263% ( 1) 00:08:35.861 12.235 - 12.286: 98.3432% ( 1) 00:08:35.861 12.389 - 12.440: 98.3601% ( 1) 00:08:35.861 12.492 - 12.543: 98.3939% ( 2) 00:08:35.861 12.543 - 12.594: 98.4108% ( 1) 00:08:35.861 12.594 - 12.646: 98.4277% ( 1) 00:08:35.861 12.646 - 12.697: 98.4446% ( 1) 00:08:35.861 12.851 - 12.903: 98.4615% ( 1) 00:08:35.861 12.954 - 13.006: 98.5123% ( 3) 00:08:35.861 13.057 - 13.108: 98.5292% ( 1) 00:08:35.861 13.108 - 13.160: 98.5630% ( 2) 00:08:35.861 13.160 - 13.263: 98.6306% ( 4) 00:08:35.861 13.263 - 13.365: 98.6982% ( 4) 00:08:35.861 13.365 - 13.468: 98.8166% ( 7) 00:08:35.861 13.468 - 13.571: 98.9687% ( 9) 00:08:35.861 13.571 - 13.674: 99.0702% ( 6) 00:08:35.861 13.674 - 13.777: 99.1716% ( 6) 00:08:35.861 13.777 - 13.880: 99.2899% ( 7) 00:08:35.861 13.880 - 13.982: 99.3914% ( 6) 00:08:35.861 13.982 - 14.085: 99.4083% ( 1) 00:08:35.861 14.085 - 14.188: 99.4421% ( 2) 00:08:35.861 14.188 - 14.291: 99.4759% ( 2) 00:08:35.861 14.291 - 14.394: 99.5097% ( 2) 00:08:35.861 14.496 - 14.599: 99.5266% ( 1) 00:08:35.861 14.599 - 14.702: 99.5435% ( 1) 00:08:35.861 14.702 - 14.805: 99.5773% ( 2) 
00:08:35.861 14.805 - 14.908: 99.5943% ( 1) 00:08:35.861 14.908 - 15.010: 99.6112% ( 1) 00:08:35.861 15.113 - 15.216: 99.6281% ( 1) 00:08:35.861 16.244 - 16.347: 99.6450% ( 1) 00:08:35.861 16.450 - 16.553: 99.6619% ( 1) 00:08:35.861 16.758 - 16.861: 99.6788% ( 1) 00:08:35.861 17.169 - 17.272: 99.6957% ( 1) 00:08:35.861 19.329 - 19.431: 99.7126% ( 1) 00:08:35.861 20.357 - 20.459: 99.7295% ( 1) 00:08:35.861 20.973 - 21.076: 99.7464% ( 1) 00:08:35.861 21.179 - 21.282: 99.7802% ( 2) 00:08:35.861 22.104 - 22.207: 99.7971% ( 1) 00:08:35.861 23.030 - 23.133: 99.8140% ( 1) 00:08:35.861 23.544 - 23.647: 99.8309% ( 1) 00:08:35.861 23.749 - 23.852: 99.8478% ( 1) 00:08:35.861 24.058 - 24.161: 99.8648% ( 1) 00:08:35.861 24.366 - 24.469: 99.8817% ( 1) 00:08:35.861 27.348 - 27.553: 99.8986% ( 1) 00:08:35.861 38.451 - 38.657: 99.9155% ( 1) 00:08:35.861 40.919 - 41.124: 99.9324% ( 1) 00:08:35.861 41.124 - 41.330: 99.9493% ( 1) 00:08:35.861 48.321 - 48.527: 99.9662% ( 1) 00:08:35.861 54.696 - 55.107: 99.9831% ( 1) 00:08:35.861 103.634 - 104.045: 100.0000% ( 1) 00:08:35.861 00:08:35.861 ************************************ 00:08:35.861 END TEST nvme_overhead 00:08:35.861 ************************************ 00:08:35.861 00:08:35.861 real 0m1.283s 00:08:35.861 user 0m1.089s 00:08:35.861 sys 0m0.147s 00:08:35.861 13:49:28 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.861 13:49:28 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:35.861 13:49:28 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:35.861 13:49:28 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:35.861 13:49:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.861 13:49:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:35.861 ************************************ 00:08:35.861 START TEST nvme_arbitration 00:08:35.861 ************************************ 00:08:35.861 13:49:28 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:39.151 Initializing NVMe Controllers 00:08:39.151 Attached to 0000:00:10.0 00:08:39.151 Attached to 0000:00:11.0 00:08:39.151 Attached to 0000:00:13.0 00:08:39.151 Attached to 0000:00:12.0 00:08:39.151 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:39.151 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:39.151 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:39.151 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:39.151 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:39.151 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:39.151 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:39.151 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:39.151 Initialization complete. Launching workers. 
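Before the arbitration output continues below, a note on reading the overhead histograms above: each row is "low - high: cumulative-% ( samples-in-bucket )", so the median falls in the first bucket whose cumulative percentage reaches 50. For submit latency that is the 13.263 - 13.365 us row (63.47%), just under the 13.57 us average. A sketch that finds it in a saved log (autorun.log hypothetical; assumes the single leading timestamp per line seen here):

  # Print the upper bound of the first histogram bucket at or past 50%.
  awk '$3 == "-" && $5 ~ /%$/ {
         p = $5; sub(/%/, "", p)
         if (p + 0 >= 50) { sub(/:$/, "", $4); print "median <= " $4 " us"; exit }
       }' autorun.log   # -> median <= 13.365 us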
00:08:39.151 Starting thread on core 1 with urgent priority queue 00:08:39.151 Starting thread on core 2 with urgent priority queue 00:08:39.151 Starting thread on core 3 with urgent priority queue 00:08:39.151 Starting thread on core 0 with urgent priority queue 00:08:39.151 QEMU NVMe Ctrl (12340 ) core 0: 576.00 IO/s 173.61 secs/100000 ios 00:08:39.151 QEMU NVMe Ctrl (12342 ) core 0: 576.00 IO/s 173.61 secs/100000 ios 00:08:39.151 QEMU NVMe Ctrl (12341 ) core 1: 597.33 IO/s 167.41 secs/100000 ios 00:08:39.151 QEMU NVMe Ctrl (12342 ) core 1: 597.33 IO/s 167.41 secs/100000 ios 00:08:39.151 QEMU NVMe Ctrl (12343 ) core 2: 576.00 IO/s 173.61 secs/100000 ios 00:08:39.151 QEMU NVMe Ctrl (12342 ) core 3: 533.33 IO/s 187.50 secs/100000 ios 00:08:39.151 ======================================================== 00:08:39.151 00:08:39.151 00:08:39.151 real 0m3.439s 00:08:39.151 user 0m9.390s 00:08:39.151 sys 0m0.158s 00:08:39.151 ************************************ 00:08:39.151 END TEST nvme_arbitration 00:08:39.151 ************************************ 00:08:39.151 13:49:32 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.151 13:49:32 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:39.151 13:49:32 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:39.151 13:49:32 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:39.151 13:49:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.151 13:49:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.151 ************************************ 00:08:39.151 START TEST nvme_single_aen 00:08:39.151 ************************************ 00:08:39.151 13:49:32 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:39.410 Asynchronous Event Request test 00:08:39.410 Attached to 0000:00:10.0 00:08:39.410 Attached to 0000:00:11.0 00:08:39.410 Attached to 0000:00:13.0 00:08:39.410 Attached to 0000:00:12.0 00:08:39.410 Reset controller to setup AER completions for this process 00:08:39.410 Registering asynchronous event callbacks... 
00:08:39.410 Getting orig temperature thresholds of all controllers 00:08:39.410 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:39.410 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:39.410 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:39.410 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:39.410 Setting all controllers temperature threshold low to trigger AER 00:08:39.410 Waiting for all controllers temperature threshold to be set lower 00:08:39.410 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:39.410 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:39.410 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:39.410 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:39.410 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:39.410 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:39.410 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:39.410 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:39.411 Waiting for all controllers to trigger AER and reset threshold 00:08:39.411 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:39.411 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:39.411 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:39.411 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:39.411 Cleaning up... 00:08:39.670 ************************************ 00:08:39.670 END TEST nvme_single_aen 00:08:39.670 ************************************ 00:08:39.670 00:08:39.670 real 0m0.304s 00:08:39.670 user 0m0.102s 00:08:39.670 sys 0m0.158s 00:08:39.670 13:49:32 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.670 13:49:32 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:39.670 13:49:32 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:39.670 13:49:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.670 13:49:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.670 13:49:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.670 ************************************ 00:08:39.670 START TEST nvme_doorbell_aers 00:08:39.670 ************************************ 00:08:39.670 13:49:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:39.670 13:49:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:39.670 13:49:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:39.670 13:49:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:39.670 13:49:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:39.670 13:49:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:39.670 13:49:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:39.670 13:49:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:39.670 13:49:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:39.670 13:49:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
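The last two xtrace lines above are the core of get_nvme_bdfs: gen_nvme.sh emits a JSON bdev config and jq pulls one PCIe address per controller. The same enumeration stands alone as:

  # Collect every NVMe BDF the way the helper traced above does.
  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh \
          | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"   # this rig: 0000:00:10.0 through 0000:00:13.0

The (( 4 == 0 )) check that follows is the script asserting the array is non-empty before it iterates the doorbell test once per device.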
00:08:39.670 13:49:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:39.670 13:49:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:39.670 13:49:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:39.670 13:49:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:39.928 [2024-12-11 13:49:32.939820] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65587) is not found. Dropping the request. 00:08:49.906 Executing: test_write_invalid_db 00:08:49.906 Waiting for AER completion... 00:08:49.906 Failure: test_write_invalid_db 00:08:49.906 00:08:49.906 Executing: test_invalid_db_write_overflow_sq 00:08:49.906 Waiting for AER completion... 00:08:49.906 Failure: test_invalid_db_write_overflow_sq 00:08:49.906 00:08:49.906 Executing: test_invalid_db_write_overflow_cq 00:08:49.906 Waiting for AER completion... 00:08:49.906 Failure: test_invalid_db_write_overflow_cq 00:08:49.906 00:08:49.906 13:49:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:49.906 13:49:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:50.165 [2024-12-11 13:49:42.992208] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65587) is not found. Dropping the request. 00:09:00.140 Executing: test_write_invalid_db 00:09:00.140 Waiting for AER completion... 00:09:00.140 Failure: test_write_invalid_db 00:09:00.140 00:09:00.140 Executing: test_invalid_db_write_overflow_sq 00:09:00.140 Waiting for AER completion... 00:09:00.140 Failure: test_invalid_db_write_overflow_sq 00:09:00.140 00:09:00.140 Executing: test_invalid_db_write_overflow_cq 00:09:00.140 Waiting for AER completion... 00:09:00.140 Failure: test_invalid_db_write_overflow_cq 00:09:00.140 00:09:00.140 13:49:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:00.140 13:49:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:00.140 [2024-12-11 13:49:53.050997] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65587) is not found. Dropping the request. 00:09:10.120 Executing: test_write_invalid_db 00:09:10.120 Waiting for AER completion... 00:09:10.120 Failure: test_write_invalid_db 00:09:10.120 00:09:10.120 Executing: test_invalid_db_write_overflow_sq 00:09:10.120 Waiting for AER completion... 00:09:10.120 Failure: test_invalid_db_write_overflow_sq 00:09:10.120 00:09:10.120 Executing: test_invalid_db_write_overflow_cq 00:09:10.120 Waiting for AER completion... 
00:09:10.120 Failure: test_invalid_db_write_overflow_cq 00:09:10.120 00:09:10.120 13:50:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:10.120 13:50:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:10.120 [2024-12-11 13:50:03.104334] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65587) is not found. Dropping the request. 00:09:20.099 Executing: test_write_invalid_db 00:09:20.099 Waiting for AER completion... 00:09:20.099 Failure: test_write_invalid_db 00:09:20.099 00:09:20.099 Executing: test_invalid_db_write_overflow_sq 00:09:20.099 Waiting for AER completion... 00:09:20.099 Failure: test_invalid_db_write_overflow_sq 00:09:20.099 00:09:20.099 Executing: test_invalid_db_write_overflow_cq 00:09:20.099 Waiting for AER completion... 00:09:20.099 Failure: test_invalid_db_write_overflow_cq 00:09:20.099 00:09:20.099 ************************************ 00:09:20.099 END TEST nvme_doorbell_aers 00:09:20.099 ************************************ 00:09:20.099 00:09:20.099 real 0m40.325s 00:09:20.099 user 0m28.585s 00:09:20.099 sys 0m11.387s 00:09:20.099 13:50:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.099 13:50:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:20.099 13:50:12 nvme -- nvme/nvme.sh@97 -- # uname 00:09:20.099 13:50:12 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:20.099 13:50:12 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:20.099 13:50:12 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:20.099 13:50:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.099 13:50:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.099 ************************************ 00:09:20.099 START TEST nvme_multi_aen 00:09:20.099 ************************************ 00:09:20.099 13:50:12 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:20.357 [2024-12-11 13:50:13.158613] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65587) is not found. Dropping the request. 00:09:20.357 [2024-12-11 13:50:13.158704] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65587) is not found. Dropping the request. 00:09:20.357 [2024-12-11 13:50:13.158721] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65587) is not found. Dropping the request. 00:09:20.357 [2024-12-11 13:50:13.160249] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65587) is not found. Dropping the request. 00:09:20.357 [2024-12-11 13:50:13.160286] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65587) is not found. Dropping the request. 00:09:20.357 [2024-12-11 13:50:13.160301] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65587) is not found. Dropping the request. 00:09:20.357 [2024-12-11 13:50:13.161814] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65587) is not found. 
Dropping the request. 00:09:20.357 [2024-12-11 13:50:13.161860] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65587) is not found. Dropping the request. 00:09:20.357 [2024-12-11 13:50:13.161874] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65587) is not found. Dropping the request. 00:09:20.357 [2024-12-11 13:50:13.163262] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65587) is not found. Dropping the request. 00:09:20.357 [2024-12-11 13:50:13.163299] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65587) is not found. Dropping the request. 00:09:20.357 [2024-12-11 13:50:13.163313] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65587) is not found. Dropping the request. 00:09:20.357 Child process pid: 66098 00:09:20.616 [Child] Asynchronous Event Request test 00:09:20.616 [Child] Attached to 0000:00:10.0 00:09:20.616 [Child] Attached to 0000:00:11.0 00:09:20.616 [Child] Attached to 0000:00:13.0 00:09:20.616 [Child] Attached to 0000:00:12.0 00:09:20.616 [Child] Registering asynchronous event callbacks... 00:09:20.616 [Child] Getting orig temperature thresholds of all controllers 00:09:20.616 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:20.616 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:20.616 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:20.616 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:20.616 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:20.616 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:20.616 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:20.616 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:20.616 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:20.616 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:20.616 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:20.616 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:20.616 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:20.616 [Child] Cleaning up... 00:09:20.616 Asynchronous Event Request test 00:09:20.616 Attached to 0000:00:10.0 00:09:20.616 Attached to 0000:00:11.0 00:09:20.616 Attached to 0000:00:13.0 00:09:20.616 Attached to 0000:00:12.0 00:09:20.616 Reset controller to setup AER completions for this process 00:09:20.616 Registering asynchronous event callbacks... 
00:09:20.616 Getting orig temperature thresholds of all controllers 00:09:20.616 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:20.616 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:20.616 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:20.616 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:20.616 Setting all controllers temperature threshold low to trigger AER 00:09:20.616 Waiting for all controllers temperature threshold to be set lower 00:09:20.616 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:20.616 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:20.616 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:20.616 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:20.616 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:20.616 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:20.616 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:20.616 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:20.616 Waiting for all controllers to trigger AER and reset threshold 00:09:20.616 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:20.616 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:20.616 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:20.616 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:20.616 Cleaning up... 00:09:20.616 ************************************ 00:09:20.616 END TEST nvme_multi_aen 00:09:20.616 ************************************ 00:09:20.616 00:09:20.616 real 0m0.607s 00:09:20.616 user 0m0.210s 00:09:20.616 sys 0m0.295s 00:09:20.616 13:50:13 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.616 13:50:13 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:20.616 13:50:13 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:20.616 13:50:13 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:20.616 13:50:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.616 13:50:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.616 ************************************ 00:09:20.616 START TEST nvme_startup 00:09:20.616 ************************************ 00:09:20.616 13:50:13 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:20.876 Initializing NVMe Controllers 00:09:20.876 Attached to 0000:00:10.0 00:09:20.876 Attached to 0000:00:11.0 00:09:20.876 Attached to 0000:00:13.0 00:09:20.876 Attached to 0000:00:12.0 00:09:20.876 Initialization complete. 00:09:20.876 Time used:187095.875 (us). 
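The startup run above reports "Time used:187095.875 (us)", i.e. about 0.19 s spent bringing the controllers up inside the 0.287 s wall clock printed just below. Every test in this section closes with the same time(1)-style block, so per-test wall-clock cost can be pulled from a saved log in one pass (autorun.log hypothetical; assumes the single leading timestamp per "real NmN.NNNs" line seen here):

  # Convert each "real 0m0.304s" line to plain seconds.
  awk '$2 == "real" { split($3, t, /[ms]/); print t[1] * 60 + t[2], "s" }' autorun.log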
00:09:20.876 ************************************ 00:09:20.876 END TEST nvme_startup 00:09:20.876 ************************************ 00:09:20.876 00:09:20.876 real 0m0.287s 00:09:20.876 user 0m0.095s 00:09:20.876 sys 0m0.150s 00:09:20.876 13:50:13 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.876 13:50:13 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:21.135 13:50:13 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:21.135 13:50:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.135 13:50:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.135 13:50:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:21.135 ************************************ 00:09:21.135 START TEST nvme_multi_secondary 00:09:21.135 ************************************ 00:09:21.135 13:50:13 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:21.135 13:50:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=66154 00:09:21.135 13:50:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:21.135 13:50:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=66155 00:09:21.135 13:50:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:21.135 13:50:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:24.421 Initializing NVMe Controllers 00:09:24.421 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:24.421 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:24.421 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:24.421 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:24.421 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:24.421 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:24.421 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:24.421 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:24.421 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:24.421 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:24.421 Initialization complete. Launching workers. 
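The three spdk_nvme_perf invocations above share one shared-memory id (-i 0) but disjoint core masks (0x1, 0x2, 0x4), which is what makes this a multi-process test: the instance started first sets up the shared state and the other two attach to it (an inference from the trace order). A hand-run sketch of the same pairing, with the binary and flags copied from the xtrace lines; the sleep is an added assumption to let the first instance finish init:

  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # started first, runs 5 s
  sleep 1
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # 3 s, core 1
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # 3 s, core 2
  wait

The 3 s runs end first, which is why two full latency tables appear below before the 5 s run prints its own.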
00:09:24.421 ======================================================== 00:09:24.421 Latency(us) 00:09:24.421 Device Information : IOPS MiB/s Average min max 00:09:24.421 PCIE (0000:00:10.0) NSID 1 from core 1: 5144.90 20.10 3107.64 965.42 6986.40 00:09:24.421 PCIE (0000:00:11.0) NSID 1 from core 1: 5144.90 20.10 3109.44 991.36 7183.82 00:09:24.421 PCIE (0000:00:13.0) NSID 1 from core 1: 5144.90 20.10 3109.56 995.72 7643.14 00:09:24.421 PCIE (0000:00:12.0) NSID 1 from core 1: 5144.90 20.10 3109.95 982.14 8075.17 00:09:24.421 PCIE (0000:00:12.0) NSID 2 from core 1: 5144.90 20.10 3110.47 997.32 7182.76 00:09:24.421 PCIE (0000:00:12.0) NSID 3 from core 1: 5150.24 20.12 3107.78 991.08 7079.60 00:09:24.421 ======================================================== 00:09:24.421 Total : 30874.75 120.60 3109.14 965.42 8075.17 00:09:24.421 00:09:24.421 Initializing NVMe Controllers 00:09:24.421 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:24.421 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:24.421 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:24.421 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:24.421 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:24.421 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:24.421 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:24.421 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:24.421 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:24.421 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:24.421 Initialization complete. Launching workers. 00:09:24.421 ======================================================== 00:09:24.421 Latency(us) 00:09:24.421 Device Information : IOPS MiB/s Average min max 00:09:24.421 PCIE (0000:00:10.0) NSID 1 from core 2: 3262.15 12.74 4903.54 1342.69 11326.71 00:09:24.421 PCIE (0000:00:11.0) NSID 1 from core 2: 3262.15 12.74 4903.81 1283.90 10648.09 00:09:24.421 PCIE (0000:00:13.0) NSID 1 from core 2: 3262.15 12.74 4903.72 1298.64 11574.85 00:09:24.421 PCIE (0000:00:12.0) NSID 1 from core 2: 3262.15 12.74 4903.27 1378.78 11158.87 00:09:24.421 PCIE (0000:00:12.0) NSID 2 from core 2: 3262.15 12.74 4903.63 1103.61 12913.01 00:09:24.421 PCIE (0000:00:12.0) NSID 3 from core 2: 3262.15 12.74 4904.07 1156.58 12833.84 00:09:24.421 ======================================================== 00:09:24.421 Total : 19572.92 76.46 4903.67 1103.61 12913.01 00:09:24.421 00:09:24.680 13:50:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 66154 00:09:26.585 Initializing NVMe Controllers 00:09:26.585 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:26.585 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:26.585 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:26.585 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:26.585 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:26.585 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:26.585 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:26.585 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:26.585 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:26.585 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:26.585 Initialization complete. Launching workers. 
00:09:26.585 ======================================================== 00:09:26.585 Latency(us) 00:09:26.585 Device Information : IOPS MiB/s Average min max 00:09:26.585 PCIE (0000:00:10.0) NSID 1 from core 0: 8543.00 33.37 1871.33 951.20 7681.97 00:09:26.585 PCIE (0000:00:11.0) NSID 1 from core 0: 8543.00 33.37 1872.38 976.84 7610.47 00:09:26.585 PCIE (0000:00:13.0) NSID 1 from core 0: 8543.00 33.37 1872.34 877.70 8390.18 00:09:26.585 PCIE (0000:00:12.0) NSID 1 from core 0: 8543.00 33.37 1872.30 804.74 8433.82 00:09:26.585 PCIE (0000:00:12.0) NSID 2 from core 0: 8543.00 33.37 1872.27 758.88 7546.20 00:09:26.585 PCIE (0000:00:12.0) NSID 3 from core 0: 8543.00 33.37 1872.25 766.36 7988.08 00:09:26.585 ======================================================== 00:09:26.585 Total : 51257.97 200.23 1872.15 758.88 8433.82 00:09:26.585 00:09:26.585 13:50:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 66155 00:09:26.585 13:50:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=66230 00:09:26.585 13:50:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=66231 00:09:26.585 13:50:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:26.586 13:50:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:26.586 13:50:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:30.777 Initializing NVMe Controllers 00:09:30.777 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:30.777 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:30.777 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:30.777 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:30.777 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:30.777 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:30.777 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:30.777 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:30.777 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:30.777 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:30.777 Initialization complete. Launching workers. 
00:09:30.777 ======================================================== 00:09:30.777 Latency(us) 00:09:30.778 Device Information : IOPS MiB/s Average min max 00:09:30.778 PCIE (0000:00:10.0) NSID 1 from core 1: 5373.60 20.99 2975.28 1004.95 6382.89 00:09:30.778 PCIE (0000:00:11.0) NSID 1 from core 1: 5373.60 20.99 2977.00 1020.81 6400.48 00:09:30.778 PCIE (0000:00:13.0) NSID 1 from core 1: 5373.60 20.99 2977.07 1031.85 6461.49 00:09:30.778 PCIE (0000:00:12.0) NSID 1 from core 1: 5373.60 20.99 2977.29 1029.23 6890.20 00:09:30.778 PCIE (0000:00:12.0) NSID 2 from core 1: 5373.60 20.99 2977.56 1038.85 7123.92 00:09:30.778 PCIE (0000:00:12.0) NSID 3 from core 1: 5373.60 20.99 2977.67 1027.20 6494.07 00:09:30.778 ======================================================== 00:09:30.778 Total : 32241.59 125.94 2976.98 1004.95 7123.92 00:09:30.778 00:09:30.778 Initializing NVMe Controllers 00:09:30.778 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:30.778 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:30.778 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:30.778 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:30.778 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:30.778 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:30.778 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:30.778 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:30.778 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:30.778 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:30.778 Initialization complete. Launching workers. 00:09:30.778 ======================================================== 00:09:30.778 Latency(us) 00:09:30.778 Device Information : IOPS MiB/s Average min max 00:09:30.778 PCIE (0000:00:10.0) NSID 1 from core 0: 5261.67 20.55 3038.51 983.57 8217.50 00:09:30.778 PCIE (0000:00:11.0) NSID 1 from core 0: 5261.67 20.55 3040.28 1008.01 7969.35 00:09:30.778 PCIE (0000:00:13.0) NSID 1 from core 0: 5261.67 20.55 3040.24 1058.36 7889.42 00:09:30.778 PCIE (0000:00:12.0) NSID 1 from core 0: 5261.67 20.55 3040.20 1055.25 7792.04 00:09:30.778 PCIE (0000:00:12.0) NSID 2 from core 0: 5261.67 20.55 3040.31 1064.72 7711.87 00:09:30.778 PCIE (0000:00:12.0) NSID 3 from core 0: 5261.67 20.55 3040.28 1039.07 7470.56 00:09:30.778 ======================================================== 00:09:30.778 Total : 31570.04 123.32 3039.97 983.57 8217.50 00:09:30.778 00:09:32.156 Initializing NVMe Controllers 00:09:32.156 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:32.156 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:32.156 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:32.156 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:32.156 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:32.156 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:32.156 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:32.156 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:32.156 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:32.156 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:32.156 Initialization complete. Launching workers. 
00:09:32.156 ======================================================== 00:09:32.156 Latency(us) 00:09:32.156 Device Information : IOPS MiB/s Average min max 00:09:32.156 PCIE (0000:00:10.0) NSID 1 from core 2: 3043.32 11.89 5256.29 1052.11 12422.09 00:09:32.156 PCIE (0000:00:11.0) NSID 1 from core 2: 3046.52 11.90 5251.70 1079.16 13919.98 00:09:32.156 PCIE (0000:00:13.0) NSID 1 from core 2: 3046.52 11.90 5251.62 1053.02 14190.08 00:09:32.156 PCIE (0000:00:12.0) NSID 1 from core 2: 3046.52 11.90 5251.27 1027.40 14262.31 00:09:32.156 PCIE (0000:00:12.0) NSID 2 from core 2: 3046.52 11.90 5251.46 1038.47 14047.33 00:09:32.156 PCIE (0000:00:12.0) NSID 3 from core 2: 3046.52 11.90 5251.12 1079.45 12327.52 00:09:32.156 ======================================================== 00:09:32.156 Total : 18275.93 71.39 5252.24 1027.40 14262.31 00:09:32.156 00:09:32.156 ************************************ 00:09:32.156 END TEST nvme_multi_secondary 00:09:32.156 ************************************ 00:09:32.156 13:50:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 66230 00:09:32.156 13:50:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 66231 00:09:32.156 00:09:32.156 real 0m10.954s 00:09:32.156 user 0m18.561s 00:09:32.156 sys 0m1.019s 00:09:32.156 13:50:24 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.156 13:50:24 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:32.156 13:50:24 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:32.156 13:50:24 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:32.156 13:50:24 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/65163 ]] 00:09:32.156 13:50:24 nvme -- common/autotest_common.sh@1094 -- # kill 65163 00:09:32.156 13:50:24 nvme -- common/autotest_common.sh@1095 -- # wait 65163 00:09:32.156 [2024-12-11 13:50:24.975981] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66097) is not found. Dropping the request. 00:09:32.156 [2024-12-11 13:50:24.976119] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66097) is not found. Dropping the request. 00:09:32.156 [2024-12-11 13:50:24.976200] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66097) is not found. Dropping the request. 00:09:32.156 [2024-12-11 13:50:24.976254] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66097) is not found. Dropping the request. 00:09:32.156 [2024-12-11 13:50:24.982271] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66097) is not found. Dropping the request. 00:09:32.156 [2024-12-11 13:50:24.982341] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66097) is not found. Dropping the request. 00:09:32.156 [2024-12-11 13:50:24.982371] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66097) is not found. Dropping the request. 00:09:32.156 [2024-12-11 13:50:24.982403] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66097) is not found. Dropping the request. 00:09:32.156 [2024-12-11 13:50:24.986894] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66097) is not found. Dropping the request. 
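Stepping back to the latency tables above: each ends with a Total row that should be the sum of its six per-namespace IOPS entries. A consistency check against the final core-2 table, over a saved log (autorun.log hypothetical; IOPS is always the fifth field from the end of a row):

  grep 'from core 2:' autorun.log | tail -6 \
    | awk '{ s += $(NF - 4) } END { printf "%.2f IOPS\n", s }'
  # -> 18275.92 IOPS; the table's Total row says 18275.93, i.e. it agrees
  #    up to the rounding of the printed per-row values.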
00:09:32.156 [2024-12-11 13:50:24.987172] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66097) is not found. Dropping the request. 00:09:32.156 [2024-12-11 13:50:24.987210] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66097) is not found. Dropping the request. 00:09:32.156 [2024-12-11 13:50:24.987241] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66097) is not found. Dropping the request. 00:09:32.156 [2024-12-11 13:50:24.991216] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66097) is not found. Dropping the request. 00:09:32.156 [2024-12-11 13:50:24.991270] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66097) is not found. Dropping the request. 00:09:32.156 [2024-12-11 13:50:24.991290] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66097) is not found. Dropping the request. 00:09:32.156 [2024-12-11 13:50:24.991312] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66097) is not found. Dropping the request. 00:09:32.156 13:50:25 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:32.156 13:50:25 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:32.156 13:50:25 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:32.156 13:50:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.156 13:50:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.156 13:50:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:32.156 ************************************ 00:09:32.156 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:32.156 ************************************ 00:09:32.156 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:32.415 * Looking for test storage... 
00:09:32.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:32.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.415 --rc genhtml_branch_coverage=1 00:09:32.415 --rc genhtml_function_coverage=1 00:09:32.415 --rc genhtml_legend=1 00:09:32.415 --rc geninfo_all_blocks=1 00:09:32.415 --rc geninfo_unexecuted_blocks=1 00:09:32.415 00:09:32.415 ' 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:32.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.415 --rc genhtml_branch_coverage=1 00:09:32.415 --rc genhtml_function_coverage=1 00:09:32.415 --rc genhtml_legend=1 00:09:32.415 --rc geninfo_all_blocks=1 00:09:32.415 --rc geninfo_unexecuted_blocks=1 00:09:32.415 00:09:32.415 ' 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:32.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.415 --rc genhtml_branch_coverage=1 00:09:32.415 --rc genhtml_function_coverage=1 00:09:32.415 --rc genhtml_legend=1 00:09:32.415 --rc geninfo_all_blocks=1 00:09:32.415 --rc geninfo_unexecuted_blocks=1 00:09:32.415 00:09:32.415 ' 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:32.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.415 --rc genhtml_branch_coverage=1 00:09:32.415 --rc genhtml_function_coverage=1 00:09:32.415 --rc genhtml_legend=1 00:09:32.415 --rc geninfo_all_blocks=1 00:09:32.415 --rc geninfo_unexecuted_blocks=1 00:09:32.415 00:09:32.415 ' 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:32.415 
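The long cmp_versions walk above boils down to: split both version strings on ".", "-" or ":", then compare field by field numerically. Here "lt 1.15 2" is true because 1 < 2 in the first field, which is why the pre-2.0 lcov option spelling (lcov_branch_coverage / lcov_function_coverage) gets exported just after. A compact standalone rework of that logic, not the upstream function verbatim, numeric fields only:

  lt() {   # usage: lt A B -> exit 0 iff version A sorts before version B
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing fields count as 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not less-than
  }
  lt 1.15 2 && echo 'lcov 1.15 predates 2'   # fires, as in the trace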
13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:32.415 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:32.675 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:32.675 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:32.675 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:32.675 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:32.675 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:32.675 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=66399 00:09:32.675 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:32.675 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:32.675 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 66399 00:09:32.675 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 66399 ']' 00:09:32.675 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.675 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.675 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
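[Note] The BDF selection traced above reduces to a short helper: gen_nvme.sh emits a JSON config entry for every NVMe controller found, jq pulls out the PCI addresses, and the first one becomes the test target. A minimal sketch of that logic — the function name is invented here for illustration; the paths and jq filter are the ones shown in the trace:

  get_first_nvme_bdf_sketch() {    # hypothetical name, mirrors get_first_nvme_bdf above
      local -a bdfs
      bdfs=($("/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
      (( ${#bdfs[@]} > 0 )) || return 1    # the real helper bails out when no controllers exist
      printf '%s\n' "${bdfs[0]}"
  }
  bdf=$(get_first_nvme_bdf_sketch)    # -> 0000:00:10.0 on this four-controller QEMU setup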
00:09:32.675 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.675 13:50:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:32.675 [2024-12-11 13:50:25.638230] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:09:32.675 [2024-12-11 13:50:25.638571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66399 ] 00:09:32.940 [2024-12-11 13:50:25.838274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.940 [2024-12-11 13:50:25.953563] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.940 [2024-12-11 13:50:25.953756] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.940 [2024-12-11 13:50:25.953925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.940 [2024-12-11 13:50:25.953952] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.876 13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.876 13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:33.876 13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:33.876 13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.876 13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:33.876 nvme0n1 00:09:33.876 13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.876 13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:33.876 13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_fmbEW.txt 00:09:33.876 13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:33.876 13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.876 13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:33.876 true 00:09:33.876 13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.876 13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:33.876 13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733925026 00:09:33.876 13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=66422 00:09:33.876 13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:33.876 13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:33.876 
13:50:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:36.411 13:50:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:36.411 13:50:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.411 13:50:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:36.411 [2024-12-11 13:50:28.910580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:36.411 [2024-12-11 13:50:28.911059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:36.411 [2024-12-11 13:50:28.911183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:36.411 [2024-12-11 13:50:28.911291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:36.411 [2024-12-11 13:50:28.913165] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:36.411 13:50:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.411 13:50:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 66422 00:09:36.411 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 66422 00:09:36.411 13:50:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 66422 00:09:36.411 13:50:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:36.411 13:50:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:36.411 13:50:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:36.411 13:50:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.411 13:50:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:36.411 13:50:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.411 13:50:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:36.411 13:50:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_fmbEW.txt 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_fmbEW.txt 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 66399 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 66399 ']' 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 66399 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66399 00:09:36.411 killing process with pid 66399 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66399' 00:09:36.411 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 66399 00:09:36.412 13:50:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 66399 00:09:38.957 13:50:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:38.957 13:50:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:38.957 00:09:38.957 real 0m6.327s 00:09:38.957 user 0m22.021s 00:09:38.957 sys 0m0.805s 00:09:38.957 ************************************ 00:09:38.957 END TEST bdev_nvme_reset_stuck_adm_cmd 
00:09:38.957 ************************************ 00:09:38.957 13:50:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.957 13:50:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:38.957 13:50:31 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:38.957 13:50:31 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:38.957 13:50:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.957 13:50:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.957 13:50:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:38.957 ************************************ 00:09:38.957 START TEST nvme_fio 00:09:38.957 ************************************ 00:09:38.957 13:50:31 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:38.957 13:50:31 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:38.957 13:50:31 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:38.957 13:50:31 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:38.957 13:50:31 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:38.957 13:50:31 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:38.957 13:50:31 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:38.957 13:50:31 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:38.957 13:50:31 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:38.957 13:50:31 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:38.957 13:50:31 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:38.958 13:50:31 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:38.958 13:50:31 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:38.958 13:50:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:38.958 13:50:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:38.958 13:50:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:38.958 13:50:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:38.958 13:50:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:39.231 13:50:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:39.231 13:50:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:39.231 13:50:32 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:39.231 13:50:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:39.231 13:50:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:39.231 13:50:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:39.231 13:50:32 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:39.231 13:50:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:39.231 13:50:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:39.231 13:50:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:39.488 13:50:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:39.489 13:50:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:39.489 13:50:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:39.489 13:50:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:39.489 13:50:32 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:39.489 13:50:32 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:39.489 13:50:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:39.489 13:50:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:39.746 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:39.746 fio-3.35 00:09:39.746 Starting 1 thread 00:09:43.942 00:09:43.942 test: (groupid=0, jobs=1): err= 0: pid=66584: Wed Dec 11 13:50:36 2024 00:09:43.942 read: IOPS=23.0k, BW=89.9MiB/s (94.3MB/s)(180MiB/2001msec) 00:09:43.942 slat (nsec): min=3899, max=60349, avg=4371.38, stdev=939.87 00:09:43.942 clat (usec): min=184, max=11266, avg=2772.21, stdev=260.67 00:09:43.942 lat (usec): min=188, max=11327, avg=2776.58, stdev=261.07 00:09:43.942 clat percentiles (usec): 00:09:43.942 | 1.00th=[ 2540], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2671], 00:09:43.942 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2769], 00:09:43.942 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 2868], 95.00th=[ 2933], 00:09:43.942 | 99.00th=[ 3294], 99.50th=[ 3916], 99.90th=[ 5997], 99.95th=[ 8717], 00:09:43.942 | 99.99th=[10945] 00:09:43.942 bw ( KiB/s): min=88928, max=92736, per=99.01%, avg=91162.67, stdev=1988.27, samples=3 00:09:43.942 iops : min=22232, max=23184, avg=22790.67, stdev=497.07, samples=3 00:09:43.942 write: IOPS=22.9k, BW=89.4MiB/s (93.7MB/s)(179MiB/2001msec); 0 zone resets 00:09:43.942 slat (nsec): min=4026, max=40683, avg=4637.46, stdev=936.82 00:09:43.942 clat (usec): min=209, max=10986, avg=2780.25, stdev=269.86 00:09:43.942 lat (usec): min=214, max=11006, avg=2784.89, stdev=270.21 00:09:43.942 clat percentiles (usec): 00:09:43.942 | 1.00th=[ 2540], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2704], 00:09:43.942 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2769], 00:09:43.942 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 2900], 95.00th=[ 2933], 00:09:43.942 | 99.00th=[ 3359], 99.50th=[ 3949], 99.90th=[ 6652], 99.95th=[ 9241], 00:09:43.942 | 99.99th=[10814] 00:09:43.942 bw ( KiB/s): min=88296, max=94176, per=99.79%, avg=91336.00, stdev=2945.10, samples=3 00:09:43.942 iops : min=22074, max=23544, avg=22834.00, stdev=736.27, samples=3 00:09:43.942 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:09:43.942 lat (msec) : 2=0.05%, 4=99.44%, 10=0.43%, 20=0.03% 00:09:43.942 cpu : usr=99.40%, sys=0.10%, ctx=22, 
majf=0, minf=608 00:09:43.942 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:43.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.942 issued rwts: total=46058,45786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.942 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.942 00:09:43.942 Run status group 0 (all jobs): 00:09:43.942 READ: bw=89.9MiB/s (94.3MB/s), 89.9MiB/s-89.9MiB/s (94.3MB/s-94.3MB/s), io=180MiB (189MB), run=2001-2001msec 00:09:43.942 WRITE: bw=89.4MiB/s (93.7MB/s), 89.4MiB/s-89.4MiB/s (93.7MB/s-93.7MB/s), io=179MiB (188MB), run=2001-2001msec 00:09:43.942 ----------------------------------------------------- 00:09:43.942 Suppressions used: 00:09:43.942 count bytes template 00:09:43.942 1 32 /usr/src/fio/parse.c 00:09:43.942 1 8 libtcmalloc_minimal.so 00:09:43.942 ----------------------------------------------------- 00:09:43.942 00:09:43.942 13:50:36 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:43.942 13:50:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:43.942 13:50:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:43.942 13:50:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:43.942 13:50:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:43.942 13:50:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:43.942 13:50:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:43.942 13:50:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:43.942 13:50:36 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:43.942 13:50:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:43.942 13:50:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:43.942 13:50:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:43.942 13:50:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:43.942 13:50:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:43.942 13:50:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:43.942 13:50:36 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:43.942 13:50:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:43.942 13:50:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:43.942 13:50:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:43.942 13:50:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:43.942 13:50:36 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:43.942 13:50:36 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:43.942 13:50:36 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:43.942 13:50:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:44.201 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:44.201 fio-3.35 00:09:44.201 Starting 1 thread 00:09:48.401 00:09:48.401 test: (groupid=0, jobs=1): err= 0: pid=66646: Wed Dec 11 13:50:40 2024 00:09:48.401 read: IOPS=22.7k, BW=88.8MiB/s (93.1MB/s)(178MiB/2001msec) 00:09:48.401 slat (nsec): min=3841, max=62073, avg=4475.94, stdev=1073.22 00:09:48.402 clat (usec): min=213, max=10201, avg=2809.12, stdev=236.02 00:09:48.402 lat (usec): min=217, max=10263, avg=2813.60, stdev=236.43 00:09:48.402 clat percentiles (usec): 00:09:48.402 | 1.00th=[ 2573], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737], 00:09:48.402 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2802], 00:09:48.402 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2900], 95.00th=[ 2933], 00:09:48.402 | 99.00th=[ 3228], 99.50th=[ 3851], 99.90th=[ 5866], 99.95th=[ 7767], 00:09:48.402 | 99.99th=[ 9896] 00:09:48.402 bw ( KiB/s): min=88624, max=91456, per=99.53%, avg=90480.00, stdev=1608.06, samples=3 00:09:48.402 iops : min=22156, max=22864, avg=22620.00, stdev=402.01, samples=3 00:09:48.402 write: IOPS=22.6k, BW=88.2MiB/s (92.5MB/s)(177MiB/2001msec); 0 zone resets 00:09:48.402 slat (nsec): min=3883, max=44900, avg=4694.37, stdev=1074.20 00:09:48.402 clat (usec): min=188, max=10058, avg=2814.60, stdev=238.71 00:09:48.402 lat (usec): min=193, max=10080, avg=2819.29, stdev=239.07 00:09:48.402 clat percentiles (usec): 00:09:48.402 | 1.00th=[ 2573], 5.00th=[ 2671], 10.00th=[ 2704], 20.00th=[ 2737], 00:09:48.402 | 30.00th=[ 2769], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:09:48.402 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2900], 95.00th=[ 2966], 00:09:48.402 | 99.00th=[ 3261], 99.50th=[ 3851], 99.90th=[ 6194], 99.95th=[ 8029], 00:09:48.402 | 99.99th=[ 9634] 00:09:48.402 bw ( KiB/s): min=88024, max=93072, per=100.00%, avg=90720.00, stdev=2541.52, samples=3 00:09:48.402 iops : min=22006, max=23268, avg=22680.00, stdev=635.38, samples=3 00:09:48.402 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:48.402 lat (msec) : 2=0.10%, 4=99.46%, 10=0.40%, 20=0.01% 00:09:48.402 cpu : usr=99.25%, sys=0.15%, ctx=2, majf=0, minf=608 00:09:48.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:48.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.402 issued rwts: total=45475,45198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.402 00:09:48.402 Run status group 0 (all jobs): 00:09:48.402 READ: bw=88.8MiB/s (93.1MB/s), 88.8MiB/s-88.8MiB/s (93.1MB/s-93.1MB/s), io=178MiB (186MB), run=2001-2001msec 00:09:48.402 WRITE: bw=88.2MiB/s (92.5MB/s), 88.2MiB/s-88.2MiB/s (92.5MB/s-92.5MB/s), io=177MiB (185MB), run=2001-2001msec 00:09:48.402 ----------------------------------------------------- 00:09:48.402 Suppressions used: 00:09:48.402 count bytes template 00:09:48.402 1 32 /usr/src/fio/parse.c 00:09:48.402 1 8 libtcmalloc_minimal.so 00:09:48.402 ----------------------------------------------------- 00:09:48.402 
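[Note] Each fio pass in this test follows the invocation assembled at the end of the trace: the SPDK ioengine is loaded through LD_PRELOAD (with the ASan runtime listed first so its interceptors resolve before the instrumented plugin), and the target controller is passed as a fio "filename" in which the colons of the PCI address are replaced with dots, since fio treats ':' as an option separator. A condensed sketch of the command line, using the controller from the run just completed:

  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096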
00:09:48.402 13:50:41 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:48.402 13:50:41 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:48.402 13:50:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:48.402 13:50:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:48.402 13:50:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:48.402 13:50:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:48.661 13:50:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:48.661 13:50:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:48.661 13:50:41 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:48.661 13:50:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:48.661 13:50:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:48.661 13:50:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:48.661 13:50:41 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:48.661 13:50:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:48.661 13:50:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:48.661 13:50:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:48.661 13:50:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:48.661 13:50:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:48.661 13:50:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:48.661 13:50:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:48.661 13:50:41 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:48.661 13:50:41 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:48.661 13:50:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:48.661 13:50:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:48.919 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:48.919 fio-3.35 00:09:48.919 Starting 1 thread 00:09:53.109 00:09:53.109 test: (groupid=0, jobs=1): err= 0: pid=66707: Wed Dec 11 13:50:45 2024 00:09:53.109 read: IOPS=22.4k, BW=87.5MiB/s (91.7MB/s)(175MiB/2001msec) 00:09:53.109 slat (nsec): min=3732, max=71935, avg=4636.86, stdev=1080.91 00:09:53.109 clat (usec): min=188, max=11735, avg=2852.25, stdev=285.66 00:09:53.109 lat (usec): min=193, max=11807, avg=2856.89, stdev=286.09 00:09:53.109 clat percentiles (usec): 00:09:53.109 | 1.00th=[ 2606], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2769], 00:09:53.109 | 
30.00th=[ 2802], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:09:53.109 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2933], 95.00th=[ 2999], 00:09:53.109 | 99.00th=[ 3228], 99.50th=[ 4113], 99.90th=[ 6587], 99.95th=[ 9241], 00:09:53.109 | 99.99th=[11469] 00:09:53.109 bw ( KiB/s): min=87800, max=90072, per=99.63%, avg=89229.33, stdev=1244.44, samples=3 00:09:53.109 iops : min=21950, max=22518, avg=22307.33, stdev=311.11, samples=3 00:09:53.109 write: IOPS=22.2k, BW=86.9MiB/s (91.1MB/s)(174MiB/2001msec); 0 zone resets 00:09:53.109 slat (nsec): min=3989, max=54184, avg=4832.03, stdev=1111.56 00:09:53.109 clat (usec): min=321, max=11526, avg=2857.75, stdev=290.94 00:09:53.109 lat (usec): min=326, max=11547, avg=2862.58, stdev=291.33 00:09:53.109 clat percentiles (usec): 00:09:53.109 | 1.00th=[ 2638], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2769], 00:09:53.109 | 30.00th=[ 2802], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:09:53.109 | 70.00th=[ 2900], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 2999], 00:09:53.109 | 99.00th=[ 3228], 99.50th=[ 4113], 99.90th=[ 7439], 99.95th=[ 9503], 00:09:53.109 | 99.99th=[11207] 00:09:53.109 bw ( KiB/s): min=87368, max=91128, per=100.00%, avg=89408.00, stdev=1900.32, samples=3 00:09:53.109 iops : min=21842, max=22782, avg=22352.00, stdev=475.08, samples=3 00:09:53.109 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:53.109 lat (msec) : 2=0.10%, 4=99.29%, 10=0.54%, 20=0.04% 00:09:53.109 cpu : usr=99.30%, sys=0.15%, ctx=3, majf=0, minf=609 00:09:53.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:53.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.109 issued rwts: total=44801,44499,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.109 00:09:53.109 Run status group 0 (all jobs): 00:09:53.109 READ: bw=87.5MiB/s (91.7MB/s), 87.5MiB/s-87.5MiB/s (91.7MB/s-91.7MB/s), io=175MiB (184MB), run=2001-2001msec 00:09:53.109 WRITE: bw=86.9MiB/s (91.1MB/s), 86.9MiB/s-86.9MiB/s (91.1MB/s-91.1MB/s), io=174MiB (182MB), run=2001-2001msec 00:09:53.109 ----------------------------------------------------- 00:09:53.109 Suppressions used: 00:09:53.109 count bytes template 00:09:53.109 1 32 /usr/src/fio/parse.c 00:09:53.109 1 8 libtcmalloc_minimal.so 00:09:53.109 ----------------------------------------------------- 00:09:53.109 00:09:53.109 13:50:45 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:53.109 13:50:45 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:53.109 13:50:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:53.109 13:50:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:53.109 13:50:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:53.109 13:50:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:53.368 13:50:46 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:53.368 13:50:46 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:53.368 13:50:46 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:53.368 13:50:46 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:53.368 13:50:46 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:53.368 13:50:46 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:53.368 13:50:46 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:53.368 13:50:46 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:53.368 13:50:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:53.368 13:50:46 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:53.368 13:50:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:53.368 13:50:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:53.368 13:50:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:53.627 13:50:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:53.627 13:50:46 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:53.627 13:50:46 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:53.627 13:50:46 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:53.627 13:50:46 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:53.627 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:53.627 fio-3.35 00:09:53.627 Starting 1 thread 00:09:58.950 00:09:58.950 test: (groupid=0, jobs=1): err= 0: pid=66773: Wed Dec 11 13:50:51 2024 00:09:58.950 read: IOPS=22.9k, BW=89.4MiB/s (93.7MB/s)(179MiB/2001msec) 00:09:58.950 slat (nsec): min=3706, max=80958, avg=4501.40, stdev=1005.48 00:09:58.950 clat (usec): min=204, max=10656, avg=2789.80, stdev=276.57 00:09:58.950 lat (usec): min=209, max=10737, avg=2794.30, stdev=276.91 00:09:58.950 clat percentiles (usec): 00:09:58.950 | 1.00th=[ 2180], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2704], 00:09:58.950 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2769], 60.00th=[ 2802], 00:09:58.950 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2900], 95.00th=[ 2966], 00:09:58.951 | 99.00th=[ 3621], 99.50th=[ 4228], 99.90th=[ 5735], 99.95th=[ 8029], 00:09:58.951 | 99.99th=[10421] 00:09:58.951 bw ( KiB/s): min=88608, max=92672, per=99.37%, avg=90952.00, stdev=2102.63, samples=3 00:09:58.951 iops : min=22152, max=23168, avg=22738.00, stdev=525.66, samples=3 00:09:58.951 write: IOPS=22.7k, BW=88.8MiB/s (93.2MB/s)(178MiB/2001msec); 0 zone resets 00:09:58.951 slat (nsec): min=3805, max=80261, avg=4719.43, stdev=1104.50 00:09:58.951 clat (usec): min=230, max=10451, avg=2794.93, stdev=284.71 00:09:58.951 lat (usec): min=235, max=10474, avg=2799.65, stdev=285.00 00:09:58.951 clat percentiles (usec): 00:09:58.951 | 1.00th=[ 2180], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2704], 00:09:58.951 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2769], 60.00th=[ 2802], 00:09:58.951 | 70.00th=[ 2835], 80.00th=[ 
2868], 90.00th=[ 2900], 95.00th=[ 2966], 00:09:58.951 | 99.00th=[ 3687], 99.50th=[ 4228], 99.90th=[ 6259], 99.95th=[ 8291], 00:09:58.951 | 99.99th=[10159] 00:09:58.951 bw ( KiB/s): min=88008, max=93208, per=100.00%, avg=91149.33, stdev=2763.90, samples=3 00:09:58.951 iops : min=22002, max=23302, avg=22787.33, stdev=690.97, samples=3 00:09:58.951 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:58.951 lat (msec) : 2=0.56%, 4=98.72%, 10=0.67%, 20=0.01% 00:09:58.951 cpu : usr=99.35%, sys=0.10%, ctx=3, majf=0, minf=607 00:09:58.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:58.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.951 issued rwts: total=45788,45509,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.951 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.951 00:09:58.951 Run status group 0 (all jobs): 00:09:58.951 READ: bw=89.4MiB/s (93.7MB/s), 89.4MiB/s-89.4MiB/s (93.7MB/s-93.7MB/s), io=179MiB (188MB), run=2001-2001msec 00:09:58.951 WRITE: bw=88.8MiB/s (93.2MB/s), 88.8MiB/s-88.8MiB/s (93.2MB/s-93.2MB/s), io=178MiB (186MB), run=2001-2001msec 00:09:58.951 ----------------------------------------------------- 00:09:58.951 Suppressions used: 00:09:58.951 count bytes template 00:09:58.951 1 32 /usr/src/fio/parse.c 00:09:58.951 1 8 libtcmalloc_minimal.so 00:09:58.951 ----------------------------------------------------- 00:09:58.951 00:09:58.951 13:50:51 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:58.951 13:50:51 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:58.951 00:09:58.951 real 0m20.182s 00:09:58.951 user 0m15.915s 00:09:58.951 sys 0m3.939s 00:09:58.951 13:50:51 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.951 13:50:51 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:58.951 ************************************ 00:09:58.951 END TEST nvme_fio 00:09:58.951 ************************************ 00:09:58.951 ************************************ 00:09:58.951 END TEST nvme 00:09:58.951 ************************************ 00:09:58.951 00:09:58.951 real 1m35.275s 00:09:58.951 user 3m44.089s 00:09:58.951 sys 0m23.082s 00:09:58.951 13:50:51 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.951 13:50:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:58.951 13:50:51 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:58.951 13:50:51 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:58.951 13:50:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.951 13:50:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.951 13:50:51 -- common/autotest_common.sh@10 -- # set +x 00:09:58.951 ************************************ 00:09:58.951 START TEST nvme_scc 00:09:58.951 ************************************ 00:09:58.951 13:50:51 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:59.210 * Looking for test storage... 
00:09:59.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:59.210 13:50:52 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:59.210 13:50:52 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:59.210 13:50:52 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:59.210 13:50:52 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.210 13:50:52 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:59.211 13:50:52 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.211 13:50:52 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.211 13:50:52 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.211 13:50:52 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:59.211 13:50:52 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.211 13:50:52 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:59.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.211 --rc genhtml_branch_coverage=1 00:09:59.211 --rc genhtml_function_coverage=1 00:09:59.211 --rc genhtml_legend=1 00:09:59.211 --rc geninfo_all_blocks=1 00:09:59.211 --rc geninfo_unexecuted_blocks=1 00:09:59.211 00:09:59.211 ' 00:09:59.211 13:50:52 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:59.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.211 --rc genhtml_branch_coverage=1 00:09:59.211 --rc genhtml_function_coverage=1 00:09:59.211 --rc genhtml_legend=1 00:09:59.211 --rc geninfo_all_blocks=1 00:09:59.211 --rc geninfo_unexecuted_blocks=1 00:09:59.211 00:09:59.211 ' 00:09:59.211 13:50:52 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:59.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.211 --rc genhtml_branch_coverage=1 00:09:59.211 --rc genhtml_function_coverage=1 00:09:59.211 --rc genhtml_legend=1 00:09:59.211 --rc geninfo_all_blocks=1 00:09:59.211 --rc geninfo_unexecuted_blocks=1 00:09:59.211 00:09:59.211 ' 00:09:59.211 13:50:52 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:59.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.211 --rc genhtml_branch_coverage=1 00:09:59.211 --rc genhtml_function_coverage=1 00:09:59.211 --rc genhtml_legend=1 00:09:59.211 --rc geninfo_all_blocks=1 00:09:59.211 --rc geninfo_unexecuted_blocks=1 00:09:59.211 00:09:59.211 ' 00:09:59.211 13:50:52 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:59.211 13:50:52 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:59.211 13:50:52 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:59.211 13:50:52 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:59.211 13:50:52 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:59.211 13:50:52 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.211 13:50:52 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.211 13:50:52 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.211 13:50:52 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.211 13:50:52 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.211 13:50:52 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.211 13:50:52 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.211 13:50:52 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:59.211 13:50:52 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
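[Note] The lcov gate traced at the top of nvme_scc (and earlier in the reset test) relies on cmp_versions from scripts/common.sh: both version strings are split on '.', '-' and ':' into arrays and compared field by field, and the first unequal field decides the ordering. A simplified sketch of that comparison, assuming purely numeric fields (the real helper also validates each field through its decimal() check):

  version_lt_sketch() {    # hypothetical name; models the "lt 1.15 2" call in the trace
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0    # missing fields default to 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1    # equal versions are not "less than"
  }
  version_lt_sketch 1.15 2 && echo 'lcov predates 2.x: enable the branch/function coverage opts'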
00:09:59.211 13:50:52 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:59.211 13:50:52 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:59.211 13:50:52 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:59.211 13:50:52 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:59.211 13:50:52 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:59.211 13:50:52 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:59.211 13:50:52 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:59.211 13:50:52 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:59.211 13:50:52 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:59.211 13:50:52 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:59.211 13:50:52 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:59.211 13:50:52 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:59.211 13:50:52 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:59.211 13:50:52 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:59.779 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:00.038 Waiting for block devices as requested 00:10:00.038 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:00.297 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:00.297 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:00.556 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:05.865 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:05.865 13:50:58 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:05.865 13:50:58 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:05.865 13:50:58 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:05.865 13:50:58 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:05.865 13:50:58 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.865 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
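The triplets repeating through this trace (IFS=:, read -r reg val, eval 'nvme0[reg]="val"') are the body of the nvme_get helper in test/nvme/functions.sh: every "field : value" line printed by nvme id-ctrl is split on the colon and stored into a bash associative array named after the controller, e.g. nvme0[mdts]=7. A minimal stand-alone re-sketch of that pattern follows; the function name is illustrative and the whitespace handling is simplified relative to the real helper, which keeps values verbatim (note sn='12341 ' above) and assigns through eval because the array name is dynamic.

  # Sketch of the parsing loop visible in this trace (illustrative,
  # not the verbatim nvme/functions.sh implementation).
  parse_id_output() {                # parse_id_output <array-name> <device>
      local -n _out=$1               # nameref onto the caller's assoc array
      local reg val
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}   # field names are padded in nvme output
          val=${val# }               # drop the space that follows the colon
          [[ -n $val ]] && _out[$reg]=$val
      done < <(nvme id-ctrl "$2")
  }

  declare -A ctrl=()
  parse_id_output ctrl /dev/nvme0
  echo "mdts=${ctrl[mdts]} ver=${ctrl[ver]}"

A nameref is just the simplest route for a self-contained sketch; the eval seen in the log accomplishes the same dynamically named assignment.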
00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:05.866 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:05.867 13:50:58 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.867 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:05.868 13:50:58 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.868 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:05.869 
13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
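The id-ns fields captured for ng0n1 above already pin down the namespace geometry: nsze=0x140000 blocks, with flbas=0x4 selecting LBA format 4, whose descriptor further down reads "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte blocks. A back-of-the-envelope check against those values, assuming (as holds for formats 0-15) that the low nibble of flbas indexes the lbaf descriptors:

  # Illustrative arithmetic on the values parsed in this trace.
  lbaf='ms:0 lbads:12 rp:0 (in use)'         # ng0n1[lbaf4], selected by flbas=0x4
  lbads=${lbaf#*lbads:}; lbads=${lbads%% *}  # extract "12"
  echo $(( 0x140000 * (1 << lbads) ))        # 5368709120 bytes = 5 GiB

That 5 GiB figure matches the size QEMU typically backs these emulated test namespaces with.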
00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:05.869 13:50:58 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.869 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:05.870 13:50:58 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.870 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:05.871 13:50:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:05.871 13:50:58 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:05.871 13:50:58 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:05.871 13:50:58 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:05.871 13:50:58 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:05.871 13:50:58 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:05.871 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.872 
13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:05.872 
13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.872 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
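[Editor's note] The loop traced above is the nvme_get helper from nvme/functions.sh walking every "field : value" line that the id-ctrl command prints (the invocation is visible at functions.sh@16, which uses /usr/local/src/nvme-cli/nvme) and storing each pair in a global associative array named after the device. A minimal sketch of that pattern, assuming nvme-cli's default output format; key cleanup is simplified relative to the real helper:

  #!/usr/bin/env bash
  # Minimal sketch of the nvme_get pattern driving the trace above:
  # parse nvme-cli "field : value" output into a global associative
  # array named after the device, e.g. nvme1[mdts]=7, nvme1[sqes]=0x66.
  nvme_get_sketch() {
      local ref=$1 subcmd=$2 dev=$3 reg val
      declare -gA "$ref=()"              # the log shows: local -gA 'nvme1=()'
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue      # skip banner and blank lines
          reg=${reg//[[:space:]]/}       # "vid     " -> vid, "lbaf  0" -> lbaf0
          val=${val# }                   # drop one leading space, keep the rest
          eval "${ref}[${reg}]=\$val"    # nvme1[vid]=0x1b36, ...
      done < <(nvme "$subcmd" "$dev")    # trace uses /usr/local/src/nvme-cli/nvme
  }

  # Usage matching functions.sh@52 above:
  #   nvme_get_sketch nvme1 id-ctrl /dev/nvme1
  #   echo "${nvme1[sn]} / ${nvme1[subnqn]}"

Note how the key cleanup explains the array names in the log: "lbaf  0" collapses to lbaf0, and string values such as sn and mn keep their trailing padding, exactly as captured above.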
00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.873 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.874 13:50:58 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:05.874 13:50:58 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:05.874 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
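[Editor's note] At functions.sh@54-57 the trace switched from the controller to its namespaces: a single extglob pattern visits both the generic character node (ng1n1) and the block node (nvme1n1) under the controller's sysfs directory, and each gets its own id-ns pass. A standalone sketch of that enumeration, assuming only the pattern shape visible in the trace:

  #!/usr/bin/env bash
  # Sketch of the namespace walk at functions.sh@54 above: one extglob
  # pattern matches both ngXnY (generic char dev) and nvmeXnY (block dev)
  # entries under each controller's sysfs directory.
  shopt -s extglob nullglob

  for ctrl in /sys/class/nvme/nvme+([0-9]); do
      inst=${ctrl##*nvme}            # /sys/class/nvme/nvme1 -> 1
      # same shape as the traced pattern: @("ng1"|"nvme1n")* for nvme1
      for ns in "$ctrl/"@("ng${inst}"|"${ctrl##*/}n")*; do
          echo "nvme${inst}: namespace node ${ns##*/} (nsid ${ns##*n})"
      done
  done

The ${ns##*n} expansion at functions.sh@58 is the same trick: stripping through the last "n" in ng1n1 or nvme1n1 leaves the namespace ID, which becomes the key in the per-controller _ctrl_ns map.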
00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:05.875 13:50:58 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.875 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 
13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
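[Editor's note] The geometry fields captured for this namespace are enough to derive its size: flbas=0x7 (logged for both ng1n1 and nvme1n1) selects LBA format 7, whose descriptor in the log reads "ms:64 lbads:12 rp:0 (in use)", and nsze=0x17a17a is the block count. A worked decode using nothing beyond the values shown in the trace:

  #!/usr/bin/env bash
  # Worked decode of the ng1n1/nvme1n1 identify data captured above.
  nsze=0x17a17a      # namespace size in logical blocks
  flbas=0x7          # bits 0-3: LBA format index (7); bit 4 clear:
                     # metadata travels in a separate buffer, not inline
  lbads=12           # from lbaf7 "ms:64 lbads:12": log2 of block size
  ms=64              # metadata bytes carried per block

  fmt=$((flbas & 0x0f))
  block=$((1 << lbads))
  bytes=$((nsze * block))
  printf 'format=%d blocks=%d block_size=%d data_bytes=%d (~%d GiB)\n' \
      "$fmt" $((nsze)) "$block" "$bytes" $((bytes >> 30))
  # -> format=7 blocks=1548666 block_size=4096 data_bytes=6343335936 (~5 GiB)

Compare the first controller earlier in the section, where lbaf4 ("ms:0 lbads:12 rp:0 (in use)") was active: same 4096-byte data blocks, but no per-block metadata.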
00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.876 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:05.877 
13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:05.877 13:50:58 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:05.877 13:50:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:05.878 13:50:58 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:05.878 13:50:58 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:05.878 13:50:58 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:05.878 13:50:58 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
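Aside: every IFS=: / read -r reg val / eval triple in this trace is one turn of the nvme_get loop in nvme/functions.sh, which splits each "field : value" line of nvme-cli output on the first colon and stores it in a global associative array named after the device. A condensed sketch of that pattern (simplified; the real helper does more bookkeeping, and plain nvme here stands in for the pinned /usr/local/src/nvme-cli/nvme binary):

    nvme_get_sketch() {
        local ref=$1 reg val; shift
        declare -gA "$ref=()"                    # e.g. nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # "vid       " -> "vid"
            val=${val# }                         # drop the leading space
            [[ -n $reg && -n $val ]] || continue # skip banners and blank lines
            eval "${ref}[\$reg]=\$val"           # e.g. nvme2[vid]=0x1b36
        done < <("$@")
    }
    # usage: nvme_get_sketch nvme2 nvme id-ctrl /dev/nvme2; echo "${nvme2[sn]}"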
00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.878 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:05.879 13:50:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
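Aside: wctemp=343 and cctemp=373 above are the controller's warning and critical composite-temperature thresholds, reported in kelvin per the NVMe spec; converted in round numbers, illustrative only:

    echo "warning threshold:  $(( 343 - 273 )) C"   # ~70 C
    echo "critical threshold: $(( 373 - 273 )) C"   # ~100 C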
00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:05.879 13:50:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.879 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:05.880 
13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.880 13:50:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:05.881 
13:50:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:05.881 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
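Aside: the for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* loop seen at functions.sh@54 relies on bash's extglob @(a|b) alternation, so a single pass over /sys/class/nvme/nvme2 picks up both the character-device namespace node (ng2n1, being parsed here) and the block-device node (nvme2n1). A standalone sketch, assuming a host that actually has /sys/class/nvme/nvme2:

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        # pattern expands to @(ng2|nvme2n)*, matching ng2n1 and nvme2n1
        echo "namespace ${ns##*n}: ${ns##*/}"        # index from the trailing nN
    done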
ng2n1 id-ns decode (continued) -- values stored by nvme_get into the ng2n1 array at 00:10:05.881:
    dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
    nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0
    mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
    nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0'  lbaf1='ms:8 lbads:9 rp:0'  lbaf2='ms:16 lbads:9 rp:0'  lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)'  lbaf5='ms:8 lbads:12 rp:0'  lbaf6='ms:16 lbads:12 rp:0'  lbaf7='ms:64 lbads:12 rp:0'
00:10:05.882 13:50:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
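The block above is the expanded xtrace of the nvme_get helper in nvme/functions.sh: it runs nvme-cli's id-ns against the namespace node, splits each "name : value" output line on ':', and evals the pair into a global associative array named after the device. A minimal sketch of that pattern, assuming a plain `nvme` binary on PATH; the function name and the whitespace handling here are illustrative, not the script's exact code:

    #!/usr/bin/env bash
    # Sketch of the read/eval loop traced above (assumed names).
    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"               # global assoc array, e.g. ng2n1
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}      # field name, e.g. "nsze"
            val=${val# }                  # drop the space following ':'
            [[ -n $val ]] || continue     # skip blank/banner lines
            eval "${ref}[\$reg]=\$val"    # e.g. ng2n1[nsze]=0x100000
        done < <(nvme id-ns "$dev")
    }
    nvme_get_sketch ng2n1 /dev/ng2n1 && echo "nsze=${ng2n1[nsze]}"

Declaring the array with -g (the trace's `local -gA 'ng2n1=()'`) is what lets callers read ${ng2n1[nsze]} after the helper returns; the `[[ -n '' ]]` entry in the trace is the banner line of the id-ns output being skipped.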
00:10:05.882 13:50:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:05.882 13:50:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:10:05.882 13:50:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:10:05.882 13:50:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:10:05.882 13:50:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
ng2n2 id-ns decode -- values stored by nvme_get into the ng2n2 array:
    nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3
    dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
    nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0
    mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
    nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0'  lbaf1='ms:8 lbads:9 rp:0'  lbaf2='ms:16 lbads:9 rp:0'  lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)'  lbaf5='ms:8 lbads:12 rp:0'  lbaf6='ms:16 lbads:12 rp:0'  lbaf7='ms:64 lbads:12 rp:0'
00:10:05.883 13:50:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
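The @54/@55 lines mark the outer loop: namespaces are discovered by globbing the controller's sysfs directory with an extglob alternation that matches both the character-device nodes (ng2n1, ng2n2, ...) and the block-device nodes (nvme2n1, ...), and the namespace index is recovered by stripping everything through the last 'n'. A runnable sketch under the same layout seen in this log:

    #!/usr/bin/env bash
    # Sketch of the @54-@58 namespace walk; paths mirror this trace,
    # and extglob is required for the @( | ) alternation.
    shopt -s extglob nullglob
    declare -A _ctrl_ns
    ctrl=/sys/class/nvme/nvme2
    # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the pattern
    # expands to @(ng2|nvme2n)*: char nodes ng2nY and block nodes nvme2nY.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        _ctrl_ns[${ns##*n}]=${ns##*/}   # key 1, 2, 3 from the trailing nY
    done
    declare -p _ctrl_ns

Because ng2nY and nvme2nY reduce to the same key, the later glob match wins: the trace sets _ctrl_ns[1]=ng2n1 here and overwrites it with nvme2n1 further down.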
00:10:05.883 13:50:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:05.883 13:50:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:10:05.883 13:50:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:10:05.883 13:50:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:10:05.884 13:50:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
ng2n3 id-ns decode -- every field identical to ng2n2 above.
00:10:05.885 13:50:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
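Every namespace in this run reports the same geometry, so the captured fields decode identically: flbas=0x4 selects LBA format 4 ("ms:0 lbads:12 rp:0 (in use)"), lbads is a power-of-two exponent, and nsze counts logical blocks. A worked decode of those numbers (field semantics per the NVMe Identify Namespace structure; variable names are illustrative):

    #!/usr/bin/env bash
    # Decode the geometry captured repeatedly in this trace.
    flbas=0x4 nsze=0x100000 lbads=12   # lbads from lbaf4 'ms:0 lbads:12 rp:0'
    fmt=$(( flbas & 0xf ))             # FLBAS bits 3:0: active LBA format index
    bs=$(( 1 << lbads ))               # LBADS is log2(block size) -> 4096 B
    echo "lbaf${fmt} in use: ${bs}-byte blocks, no metadata (ms:0)"
    echo "namespace size: $(( nsze * bs / 1024**3 )) GiB"   # 2^20 blocks * 4 KiB

That is 0x100000 four-KiB blocks with no per-block metadata: a 4 GiB namespace.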
00:10:05.885 13:50:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:05.885 13:50:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:10:05.885 13:50:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:10:05.885 13:50:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:05.885 13:50:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
nvme2n1 id-ns decode -- every field identical to ng2n2 above.
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@18 -- # shift
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"'
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000
]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:05.887 13:50:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.887 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.888 13:50:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:06.149 
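A minimal sketch of the nvme_get helper this trace keeps exercising, reconstructed from the functions.sh@16-23 lines visible above (local ref/reg/val, shift, local -gA, IFS=:, read -r, eval); the shipped helper may differ in trimming and quoting details, and the NVME path is taken from the traced @16 command:

# Reconstructed sketch, not the verbatim source.
NVME=${NVME:-/usr/local/src/nvme-cli/nvme}

nvme_get() {
    local ref=$1 reg val               # functions.sh@17
    shift                              # functions.sh@18
    local -gA "$ref=()"                # functions.sh@20: global assoc array, e.g. nvme2n3
    while IFS=: read -r reg val; do    # functions.sh@21
        [[ -n $val ]] || continue      # functions.sh@22: skip header/blank lines ([[ -n '' ]])
        reg=${reg//[[:space:]]/}       # "lbaf  4 " -> "lbaf4"
        val=${val# }                   # drop one leading pad space, keep trailing padding
        eval "${ref}[\$reg]=\$val"     # functions.sh@23: nvme2n3[nsze]=0x100000, ...
    done < <("$NVME" "$@")             # functions.sh@16: e.g. nvme id-ns /dev/nvme2n3
}

# Mirrors the traced call: nvme_get nvme2n3 id-ns /dev/nvme2n3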
13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:06.149 13:50:58 nvme_scc -- 
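A quick sanity check on the traced sizes: every namespace here reports nsze = ncap = nuse = 0x100000, and the in-use LBA format (lbaf4, lbads:12, marked "(in use)" above) means 4096-byte blocks:

# 0x100000 LBAs at the in-use 4 KiB format -> 4 GiB per namespace
printf '%d blocks * 4096 B = %d GiB\n' "$((0x100000))" "$(( (0x100000 * 4096) >> 30 ))"
# 1048576 blocks * 4096 B = 4 GiB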
nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.149 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # 
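How flbas=0x4 ties to the "(in use)" marker in the lbaf lines above: the low nibble of flbas is the index into the lbafN entries, and the selected entry's lbads is the log2 of the block size. A small decode, using the values from this trace:

flbas=0x4
fmt=$(( flbas & 0xf ))   # bits 3:0 = format index (bit 4 would flag extended metadata)
lbads=12                 # from the traced "lbaf4 : ms:0 lbads:12 rp:0 (in use)"
echo "lbaf$fmt -> $((1 << lbads))-byte blocks, no per-block metadata (ms:0)"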
nvme2n3[mcl]=128 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:06.150 13:50:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:06.150 13:50:58 nvme_scc -- 
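A condensed sketch of the discovery loop traced at functions.sh@47-63 (the @62 bdfs and @63 ordered_ctrls assignments continue just below). The glob, the array names, and the assigned values follow the log; the PCI-address derivation and the per-controller reset of _ctrl_ns are assumptions:

shopt -s extglob nullglob
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

for ctrl in /sys/class/nvme/nvme*; do                            # @47
    [[ -e $ctrl ]] || continue                                   # @48
    pci=$(basename "$(readlink -f "$ctrl/device")")              # assumption; trace only shows the BDF (@49)
    pci_can_use "$pci" || continue                               # @50 (sketched below)
    ctrl_dev=${ctrl##*/}                                         # @51
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"                # @52
    _ctrl_ns=()                                                  # assumption: fresh map per controller
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # @54
        [[ -e $ns ]] || continue                                 # @55
        ns_dev=${ns##*/}                                         # @56
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"                  # @57
        _ctrl_ns[${ns##*n}]=$ns_dev                              # @58: namespace index -> device
    done
    ctrls["$ctrl_dev"]=$ctrl_dev                                 # @60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                            # @61: value per trace, e.g. nvme2_ns
    bdfs["$ctrl_dev"]=$pci                                       # @62: e.g. 0000:00:12.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev                   # @63
done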
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:06.150 13:50:58 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:06.150 13:50:58 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:06.150 13:50:58 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:06.150 13:50:58 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:06.150 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 
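The pci_can_use gate traced from scripts/common.sh@18-27 just above: both filter lists are empty in this run, so the traced path is the no-op one (@21 empty regex operand, @25 [[ -z '' ]], @27 return 0). A sketch consistent with that trace; the PCI_ALLOWED / PCI_BLOCKED variable names are assumptions:

pci_can_use() {
    local i                                 # @18
    if [[ ${PCI_ALLOWED:-} =~ $1 ]]; then   # @21: here "" =~ 0000:00:13.0, no match
        return 0                            # explicitly allowed
    elif [[ -n ${PCI_ALLOWED:-} ]]; then    # @25 passes when the allow-list is empty
        return 1                            # allow-list set, device not on it
    fi
    for i in ${PCI_BLOCKED:-}; do           # empty here, so the loop never runs
        [[ $i == "$1" ]] && return 1        # explicitly blocked
    done
    return 0                                # @27: default usable
}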
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:06.151 13:50:58 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:06.151 13:50:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.151 
13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:06.151 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:06.152 13:50:58 nvme_scc -- 
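The wctemp=343 / cctemp=373 values captured above are kelvins, as NVMe reports all thresholds; converted:

for k in 343 373; do printf '%dK = %dC\n' "$k" "$((k - 273))"; done
# 343K = 70C (warning threshold), 373K = 100C (critical threshold)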
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 
13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:06.152 
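Two of the values cached just above are worth decoding: sqes=0x66 and cqes=0x44 each pack two 4-bit log2 fields (bits 3:0 required entry size, bits 7:4 maximum), per the NVMe base specification rather than anything in this script, so the controller advertises the standard 64-byte submission and 16-byte completion queue entries. A small worked decode:

    # Decode the NVMe SQES/CQES identify fields seen above (0x66 and 0x44).
    # Low nibble = required entry size, high nibble = maximum, both log2(bytes).
    decode_qes() {
        local val=$1 name=$2
        printf '%s: required=%d bytes, max=%d bytes\n' \
            "$name" $((1 << (val & 0xf))) $((1 << ((val >> 4) & 0xf)))
    }
    decode_qes 0x66 sqes   # -> sqes: required=64 bytes, max=64 bytes
    decode_qes 0x44 cqes   # -> cqes: required=16 bytes, max=16 bytes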
13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.152 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.153 13:50:58 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:06.153 13:50:58 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:10:06.153 13:50:58 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:10:06.153 13:50:58 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs
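At this point the scan has registered the controller in the bookkeeping arrays (ctrls, nvmes, bdfs, plus ordered_ctrls keyed by the index stripped from the device name), and nvme_scc.sh asks for the first controller whose cached oncs word has bit 8 set, the Optional NVM Command Support bit for the Copy command the SCC test needs. 0x15d has 0x100 set, so every controller in the trace qualifies and nvme1 wins. The capability test driving the loop that continues below boils down to this sketch:

    # Sketch of the ONCS test behind the ctrl_has_scc calls: bit 8 of the
    # cached identify word is the Copy ("simple copy") support bit.
    ctrl_has_scc_sketch() {
        local -n _ctrl=$1                 # nameref into e.g. the nvme1 array
        local oncs=${_ctrl[oncs]:-0}
        (( oncs & 1 << 8 ))               # 0x15d & 0x100 != 0 -> supported
    }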
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:10:06.154 13:50:58 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:10:06.154 13:50:58 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:10:06.154 13:50:58 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:10:06.154 13:50:58 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:06.719 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:07.285 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:07.285 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:07.285 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:10:07.285 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:07.543 13:51:00 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:07.543 13:51:00 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:07.543 13:51:00 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:07.543 13:51:00 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:07.543 ************************************
00:10:07.543 START TEST nvme_simple_copy
00:10:07.543 ************************************
00:10:07.543 13:51:00 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:07.800 Initializing NVMe Controllers
00:10:07.801 Attaching to 0000:00:10.0
00:10:07.801 Controller supports SCC. Attached to 0000:00:10.0
00:10:07.801 Namespace ID: 1 size: 6GB
00:10:07.801 Initialization complete.
00:10:07.801
00:10:07.801 Controller QEMU NVMe Ctrl (12340 )
00:10:07.801 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:10:07.801 Namespace Block Size:4096
00:10:07.801 Writing LBAs 0 to 63 with Random Data
00:10:07.801 Copied LBAs from 0 - 63 to the Destination LBA 256
00:10:07.801 LBAs matching Written Data: 64
00:10:07.801
00:10:07.801 real 0m0.307s
00:10:07.801 user 0m0.110s
00:10:07.801 sys 0m0.095s
00:10:07.801 13:51:00 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:07.801 ************************************
00:10:07.801 END TEST nvme_simple_copy
00:10:07.801 ************************************
00:10:07.801 13:51:00 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:10:07.801
00:10:07.801 real 0m8.867s
00:10:07.801 user 0m1.551s
00:10:07.801 sys 0m2.356s
00:10:07.801 13:51:00 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:07.801 13:51:00 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:07.801 ************************************
00:10:07.801 END TEST nvme_scc
00:10:07.801 ************************************
00:10:07.801 13:51:00 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:10:07.801 13:51:00 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:10:07.801 13:51:00 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:10:07.801 13:51:00 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:10:07.801 13:51:00 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:10:07.801 13:51:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:07.801 13:51:00 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:07.801 13:51:00 -- common/autotest_common.sh@10 -- # set +x
00:10:07.801 ************************************
00:10:07.801 START TEST nvme_fdp
00:10:07.801 ************************************
00:10:07.801 13:51:00 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:10:08.061 * Looking for test storage...
00:10:08.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:10:08.061 13:51:00 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:08.061 13:51:00 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:10:08.061 13:51:00 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:08.061 13:51:01 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:08.061 13:51:01 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.062 13:51:01 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:08.062 13:51:01 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.062 13:51:01 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.062 13:51:01 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.062 13:51:01 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:08.062 13:51:01 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.062 13:51:01 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:08.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.062 --rc genhtml_branch_coverage=1 00:10:08.062 --rc genhtml_function_coverage=1 00:10:08.062 --rc genhtml_legend=1 00:10:08.062 --rc geninfo_all_blocks=1 00:10:08.062 --rc geninfo_unexecuted_blocks=1 00:10:08.062 00:10:08.062 ' 00:10:08.062 13:51:01 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:08.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.062 --rc genhtml_branch_coverage=1 00:10:08.062 --rc genhtml_function_coverage=1 00:10:08.062 --rc genhtml_legend=1 00:10:08.062 --rc geninfo_all_blocks=1 00:10:08.062 --rc geninfo_unexecuted_blocks=1 00:10:08.062 00:10:08.062 ' 00:10:08.062 13:51:01 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:08.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.062 --rc genhtml_branch_coverage=1 00:10:08.062 --rc genhtml_function_coverage=1 00:10:08.062 --rc genhtml_legend=1 00:10:08.062 --rc geninfo_all_blocks=1 00:10:08.062 --rc geninfo_unexecuted_blocks=1 00:10:08.062 00:10:08.062 ' 00:10:08.062 13:51:01 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:08.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.062 --rc genhtml_branch_coverage=1 00:10:08.062 --rc genhtml_function_coverage=1 00:10:08.062 --rc genhtml_legend=1 00:10:08.062 --rc geninfo_all_blocks=1 00:10:08.062 --rc geninfo_unexecuted_blocks=1 00:10:08.062 00:10:08.062 ' 00:10:08.062 13:51:01 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:08.062 13:51:01 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:08.062 13:51:01 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:08.062 13:51:01 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:08.062 13:51:01 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:08.062 13:51:01 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.062 13:51:01 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.062 13:51:01 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
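The lt 1.15 2 / cmp_versions trace that finishes above is scripts/common.sh deciding whether the installed lcov is older than 2: both version strings are split into arrays on dots, dashes or colons (the IFS=.-: reads), then compared numerically field by field across max(ver1_l, ver2_l) positions. A minimal reimplementation of the same idea, with a hypothetical name and numeric fields assumed:

    # Version comparison in the spirit of the cmp_versions trace: split on
    # '.', '-' or ':' and compare field by field, missing fields as 0.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # matches the lt 1.15 2 result above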
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.062 13:51:01 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.062 13:51:01 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.062 13:51:01 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.062 13:51:01 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.062 13:51:01 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:08.062 13:51:01 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.062 13:51:01 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:08.062 13:51:01 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:08.062 13:51:01 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:08.062 13:51:01 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:08.062 13:51:01 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:08.062 13:51:01 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:08.062 13:51:01 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:08.062 13:51:01 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:08.062 13:51:01 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:08.062 13:51:01 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:08.062 13:51:01 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:08.631 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:08.888 Waiting for block devices as requested 00:10:08.888 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:08.888 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.145 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.145 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:14.417 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:14.417 13:51:07 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:14.417 13:51:07 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:14.417 13:51:07 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:14.417 13:51:07 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:14.417 13:51:07 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:14.418 13:51:07 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:14.418 13:51:07 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:14.418 13:51:07 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:14.418 13:51:07 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- 
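scan_nvme_ctrls, whose trace starts above, visits every /sys/class/nvme/nvme* device and first gates it through pci_can_use(); with the allow and block list variables setup.sh honors (PCI_ALLOWED and PCI_BLOCKED) both empty, as the bare `[[ =~ 0000:00:11.0 ]]` and `[[ -z '' ]]` tests show, every controller passes. Condensed, with simple substring matching standing in for the script's loop over list entries:

    # Condensed form of the pci_can_use() checks traced above, assuming the
    # lists arrive as space-separated BDF strings as in SPDK's scripts.
    pci_can_use_sketch() {
        local bdf=$1
        [[ " $PCI_BLOCKED " == *" $bdf "* ]] && return 1   # explicitly blocked
        [[ -z $PCI_ALLOWED ]] && return 0                  # no allow list: anything goes
        [[ " $PCI_ALLOWED " == *" $bdf "* ]]               # otherwise must be listed
    }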
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:14.418 13:51:07 nvme_fdp -- 
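One cached field above deserves a gloss: mdts=7 caps the maximum data transfer size as a power-of-two multiple of the controller's minimum memory page size. CAP.MPSMIN is not shown in this log; assuming the common 4 KiB, the limit works out to 512 KiB per command:

    # MDTS caps one transfer at (min page size) * 2^mdts.
    # The 4 KiB MPSMIN here is an assumption, not a value from this log.
    mdts=7 mpsmin_bytes=4096
    echo "max transfer: $(( mpsmin_bytes << mdts )) bytes"   # 524288 = 512 KiB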
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:14.418 13:51:07 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:14.419 13:51:07 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.419 13:51:07 nvme_fdp -- 
00:10:14.419 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0 id-ctrl fields, continued (condensed):
00:10:14.419     endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:10:14.420     sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0
00:10:14.420     icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:10:14.420     subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:10:14.420     ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:10:14.420     rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:10:14.420 13:51:07 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:10:14.420 13:51:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:14.420 13:51:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:10:14.421 13:51:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:10:14.421 13:51:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
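Every trio of [[ -n ... ]] / eval / assignment records in this dump comes from one small parser: nvme_get runs nvme-cli, splits each output line on the first colon, and evals the pair into a global associative array named after the device. A minimal sketch reconstructed from the traced statements at functions.sh@16-23; the exact whitespace trimming is an assumption, the rest matches the trace:

    nvme_get() {                       # e.g. nvme_get nvme0 id-ctrl /dev/nvme0
        local ref=$1 reg val           # ref names the array to fill (functions.sh@17)
        shift                          # remaining args go to nvme-cli   (functions.sh@18)
        local -gA "$ref=()"            # global assoc array, nvme0=()    (functions.sh@20)
        while IFS=: read -r reg val; do            # split "sqes : 0x66" on the first ':'
            [[ -n ${val// /} ]] || continue        # skip lines with no value (functions.sh@22)
            eval "${ref}[${reg// /}]=\"${val# }\"" # nvme0[sqes]="0x66"       (functions.sh@23)
        done < <(/usr/local/src/nvme-cli/nvme "$@")  # functions.sh@16
    }

Because read is given two variables, everything after the first colon lands in val with its own colons intact, which is why multi-colon values such as subnqn and the ps0 power-state line survive the split.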
00:10:14.421 13:51:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:10:14.421 13:51:07 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:14.421 13:51:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:10:14.421 13:51:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:10:14.421 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1 id-ns fields (condensed):
00:10:14.421     nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f
00:10:14.421     dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
00:10:14.421     noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:10:14.421     nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:10:14.422     nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:14.422     lbaf0='ms:0 lbads:9 rp:0'  lbaf1='ms:8 lbads:9 rp:0'  lbaf2='ms:16 lbads:9 rp:0'  lbaf3='ms:64 lbads:9 rp:0'
00:10:14.422     lbaf4='ms:0 lbads:12 rp:0 (in use)'  lbaf5='ms:8 lbads:12 rp:0'  lbaf6='ms:16 lbads:12 rp:0'  lbaf7='ms:64 lbads:12 rp:0'
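flbas=0x4 selects lbaf4, the entry tagged '(in use)': 4096-byte data blocks (lbads:12, i.e. 2^12) with no per-block metadata (ms:0). With nsze=0x140000 such blocks the namespace is 0x140000 * 4096 = 5368709120 bytes, exactly 5 GiB. An illustrative decode over the array just populated (not part of functions.sh; the low-nibble masking follows the NVMe Identify Namespace layout):

    fmt_idx=$(( ${ng0n1[flbas]} & 0xf ))        # low nibble of FLBAS picks the format -> 4
    lbaf=${ng0n1[lbaf$fmt_idx]}                 # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # -> 12
    echo $(( ${ng0n1[nsze]} * (1 << lbads) ))   # 0x140000 * 4096 = 5368709120 bytes (5 GiB)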
00:10:14.422 13:51:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:10:14.422 13:51:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:14.422 13:51:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:10:14.422 13:51:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:10:14.422 13:51:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:10:14.422 13:51:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:10:14.422 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1 id-ns fields (condensed; identical to ng0n1):
00:10:14.423     nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f
00:10:14.423     dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
00:10:14.423     noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:10:14.423     nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:10:14.423     nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:14.424     lbaf0..lbaf7 as for ng0n1, lbaf4='ms:0 lbads:12 rp:0 (in use)'
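Both namespace passes above come from the extglob alternation at functions.sh@54: for ctrl=/sys/class/nvme/nvme0 it matches the generic character node ng0n1 and the block node nvme0n1. The index expression at @58 strips everything through the last 'n', so both devices land in slot 1 of nvme0_ns, with nvme0n1 overwriting ng0n1. How the two pattern halves expand (illustrative; extglob itself is enabled by the test scripts):

    ctrl=/sys/class/nvme/nvme0
    echo "ng${ctrl##*nvme}"   # ng0    -> glob ng0*    matches ng0n1
    echo "${ctrl##*/}n"       # nvme0n -> glob nvme0n* matches nvme0n1
    ns=/sys/class/nvme/nvme0/nvme0n1
    echo "${ns##*n}"          # 1 -> _ctrl_ns[1]=nvme0n1 (same slot ng0n1 used)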
"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:14.424 13:51:07 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:14.424 13:51:07 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:14.424 13:51:07 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:14.424 13:51:07 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:14.424 13:51:07 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:14.424 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:14.425 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
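[The repeating IFS=: / read -r reg val / eval records above are nvme/functions.sh turning each
"field : value" line of nvme-cli output into one entry of a global bash associative array named
after the device. A minimal sketch of that pattern, reconstructed from the @16-@23 trace records
(not the verbatim SPDK helper):

    nvme_get() {                        # e.g. nvme_get nvme1 id-ctrl /dev/nvme1
        local ref=$1 reg val
        shift
        local -gA "$ref=()"             # global array named after the device node
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue   # skip banner/blank lines, as seen at @22
            reg=${reg//[[:space:]]/}    # 'oacs      ' -> 'oacs'
            eval "${ref}[$reg]=\"${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

After it runs, fields read back directly, e.g. ${nvme1[oacs]} -> 0x12a.]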
[trace condensed: nvme1[active_power_workload]=- closes out id-ctrl for nvme1]
00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:10:14.426 13:51:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:10:14.427 13:51:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:10:14.427 13:51:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
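[The @54 glob above deserves a note: with bash's extglob option it matches both namespace nodes a
controller exposes, the generic character device and the block device. For nvme1 it expands
roughly as follows (a sketch; variable names as in the trace):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    echo "ng${ctrl##*nvme}"   # -> ng1     (generic char-device prefix, ng1n1)
    echo "${ctrl##*/}n"       # -> nvme1n  (block-device prefix, nvme1n1)
    # so "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
    # expands to /sys/class/nvme/nvme1/@(ng1|nvme1n)* -> ng1n1 nvme1n1

Both nodes are then identified with nvme id-ns, which is why the ng1n1 and nvme1n1 dumps below
carry identical values.]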
[per-register trace condensed: the same @21-23 sequence fills ng1n1[] with:
  nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0
  nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
  nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0
  nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
  lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 '
  lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 ' lbaf5='ms:8 lbads:12 rp:0 '
  lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 (in use)']
00:10:14.428 13:51:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:10:14.428 13:51:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:14.428 13:51:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:10:14.428 13:51:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:10:14.428 13:51:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:10:14.428 13:51:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
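[The lbaf0-lbaf7 rows above are the eight LBA formats this QEMU namespace advertises; flbas=0x7
selects lbaf7, i.e. 4096-byte data blocks (lbads:12) with 64 bytes of metadata per block. A quick
capacity check from the traced values (the helper name is ours, not functions.sh's):

    ns_bytes() {                          # hypothetical helper for illustration
        local nsze=$((0x17a17a)) lbads=12 # nsze and lbads:12 copied from the trace
        echo $((nsze * (1 << lbads)))     # 6343335936 bytes, ~5.9 GiB
    }
]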
[per-register trace condensed: nvme1n1[] parses to exactly the same values as ng1n1[] above (same
  nsze/ncap/nuse through lbaf7='ms:64 lbads:12 rp:0 (in use)'), as expected for the block and
  generic nodes of a single namespace]
00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:10:14.430 13:51:07 nvme_fdp -- scripts/common.sh@18 -- # local i
00:10:14.430 13:51:07 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:10:14.430 13:51:07 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:14.430 13:51:07 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
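[Stepping back, the @47-@63 records show the shape of the whole scan: one pass over
/sys/class/nvme, one id-ctrl per controller, one id-ns per namespace node, and four bookkeeping
maps filled in at the end. A condensed sketch of that control flow as traced (details such as how
$pci is obtained are assumptions, and @53's local implies this runs inside a function):

    shopt -s extglob nullglob
    declare -A ctrls nvmes bdfs
    ordered_ctrls=()
    for ctrl in /sys/class/nvme/nvme*; do                            # @47
        pci=$(< "$ctrl/address") || continue                         # assumption: BDF from sysfs
        pci_can_use "$pci" || continue                               # @50 -> scripts/common.sh
        ctrl_dev=${ctrl##*/}                                         # @51, e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"                # @52
        local -n _ctrl_ns=${ctrl_dev}_ns                             # @53
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # @54
            ns_dev=${ns##*/}                                         # @56
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"                  # @57
            _ctrl_ns[${ns##*n}]=$ns_dev                              # @58, keyed by nsid
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                                 # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                            # @61
        bdfs["$ctrl_dev"]=$pci                                       # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev                   # @63
    done
]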
[[ -n '' ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
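[editor's note] The IFS=: / read / eval pattern repeated throughout the trace above is how functions.sh turns nvme-cli's text output into bash state. A minimal sketch of that loop, inferred from the xtrace output rather than copied from nvme/functions.sh (the helper name nvme_get appears in the trace; the body here is a reconstruction):

nvme_get() {
    local ref=$1 reg val
    shift                          # remaining args: nvme-cli subcommand + device
    local -gA "$ref=()"            # global associative array, e.g. nvme2=()

    # id-ctrl/id-ns print one "field : value" line per register; split on ':'
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # drop padding around the key
        [[ -n $reg ]] || continue              # skip blank/banner lines
        eval "${ref}[${reg}]=\"${val# }\""     # e.g. nvme2[vid]="0x1b36"
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}

nvme_get nvme2 id-ctrl /dev/nvme2    # afterwards ${nvme2[sn]}, ${nvme2[mn]}, ... hold the fields seen in this log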
00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.430 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:14.431 13:51:07 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
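[editor's note] Nothing in the trace interprets the values it captures, but a few of the fields above decode readily under the NVMe base spec's field definitions. A hypothetical snippet (not part of nvme/functions.sh), assuming the nvme2 array populated above:

# WCTEMP/CCTEMP are reported in kelvins: 343 -> 70 C, 373 -> 100 C.
kelvin_to_c() { echo $(($1 - 273)); }
kelvin_to_c "${nvme2[wctemp]}"    # -> 70

# OACS is a bitmask; 0x12a above has bits 1, 3, 5 and 8 set.
oacs=${nvme2[oacs]}
((oacs & 1 << 1)) && echo "Format NVM supported"
((oacs & 1 << 3)) && echo "Namespace Management supported"
((oacs & 1 << 5)) && echo "Directives supported"
((oacs & 1 << 8)) && echo "Doorbell Buffer Config supported"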
00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:14.431 13:51:07 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:14.431 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.432 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.432 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.432 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:14.432 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:14.432 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.432 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:14.432 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.432 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:14.432 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:14.432 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.432 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.432 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:14.697 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
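[editor's note] The sqes=0x66 and cqes=0x44 values captured a little earlier pack two powers of two into one byte: required entry size in the low nibble, maximum in the high nibble, both as log2(bytes). A short check, again assuming the nvme2 array from this trace:

sqes=${nvme2[sqes]}    # 0x66 -> 64-byte submission queue entries, min and max
cqes=${nvme2[cqes]}    # 0x44 -> 16-byte completion queue entries, min and max
printf 'SQE: %d..%d bytes\n' $((1 << (sqes & 0xf))) $((1 << (sqes >> 4)))
printf 'CQE: %d..%d bytes\n' $((1 << (cqes & 0xf))) $((1 << (cqes >> 4)))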
00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.698 
13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.698 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.699 13:51:07 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:14.699 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:14.700 13:51:07 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 
13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:14.700 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:14.701 
13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:14.701 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
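[Editor's sketch] The trace above is test/nvme/functions.sh's nvme_get helper at work: for each namespace node the outer loop matches both the ng* and nvme*n* entries under /sys/class/nvme/nvme2, then nvme_get runs nvme-cli's id-ns against the device and folds every "reg : value" line into a global bash associative array named after the namespace (ng2n2, ng2n3, nvme2n1, ...), skipping empty values exactly as the [[ -n ... ]] checks show. A minimal sketch of that parsing pattern follows, assuming nvme-cli's human-readable "reg : value" output format; nvme_get_sketch and the echo lines are illustrative stand-ins, not the suite's actual code.

  #!/usr/bin/env bash
  # Sketch of the register-parsing loop visible in this trace (hypothetical
  # name; the real helper is nvme_get in test/nvme/functions.sh).
  nvme_get_sketch() {
      local ref=$1 dev=$2 reg val
      # Declare a global associative array named after the namespace,
      # e.g. ng2n3 or nvme2n1, mirroring the trace's 'local -gA'.
      declare -gA "$ref=()"
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}   # collapse padding around the register name
          val=${val# }               # drop the space after the ':' separator
          [[ -n $reg && -n $val ]] || continue  # skip headers/blank values,
                                                # as the [[ -n ... ]] checks do
          eval "$ref[\$reg]=\$val"   # e.g. ng2n3[nsze]=0x100000
      done < <(/usr/local/src/nvme-cli/nvme id-ns "$dev")
  }

  # Usage mirroring the trace: parse namespace 3 of controller nvme2,
  # then read back a couple of the stored registers.
  nvme_get_sketch ng2n3 /dev/ng2n3
  echo "nsze=${ng2n3[nsze]} flbas=${ng2n3[flbas]} lbaf4=${ng2n3[lbaf4]}"

Storing the registers in per-namespace associative arrays is what lets later test stages (FDP checks further down this log) look up fields like flbas or lbaf4 by name instead of re-running nvme-cli.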
00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:14.702 13:51:07 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:14.702 13:51:07 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:14.702 13:51:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:14.702 13:51:07 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:14.703 
13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:14.703 13:51:07 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:14.703 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:14.704 
13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:14.704 13:51:07 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.704 13:51:07 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:14.705 13:51:07 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.705 13:51:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:14.706 13:51:07 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:14.706 13:51:07 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:14.706 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:14.707 13:51:07 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:14.707 13:51:07 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:14.707 13:51:07 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:14.707 13:51:07 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:14.707 13:51:07 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:14.707 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 
13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.708 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.709 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:14.969 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:14.970 13:51:07 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:14.970 13:51:07 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:14.971 13:51:07 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:14.971 13:51:07 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:14.971 13:51:07 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:14.971 13:51:07 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:15.539 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:16.593 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.593 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.593 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.593 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.593 13:51:09 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:16.593 13:51:09 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:16.593 13:51:09 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.593 13:51:09 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:16.593 ************************************ 00:10:16.593 START TEST nvme_flexible_data_placement 00:10:16.593 ************************************ 00:10:16.593 13:51:09 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:16.871 Initializing NVMe Controllers 00:10:16.871 Attaching to 0000:00:13.0 00:10:16.871 Controller supports FDP Attached to 0000:00:13.0 00:10:16.871 Namespace ID: 1 Endurance Group ID: 1 00:10:16.871 Initialization complete. 
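The controller selection traced above comes down to one capability bit: CTRATT bit 19 advertises Flexible Data Placement, so nvme3 (ctratt=0x88010, bit 19 set) is chosen while the 0x8000 controllers are passed over. A minimal standalone sketch of the same test, assuming nvme-cli is installed (the harness itself reads ctratt from its cached identify data rather than shelling out):

    # pick the first character-device controller whose CTRATT advertises FDP (bit 19)
    for ctrl in /dev/nvme[0-9]*; do
      [[ $ctrl =~ nvme[0-9]+$ ]] || continue   # skip namespaces such as nvme0n1
      ctratt=$(nvme id-ctrl "$ctrl" | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
      [[ -n $ctratt ]] || continue
      (( ctratt & 1 << 19 )) && { echo "$ctrl supports FDP"; break; }
    done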
00:10:16.871 00:10:16.871 ================================== 00:10:16.871 == FDP tests for Namespace: #01 == 00:10:16.871 ================================== 00:10:16.871 00:10:16.871 Get Feature: FDP: 00:10:16.871 ================= 00:10:16.871 Enabled: Yes 00:10:16.871 FDP configuration Index: 0 00:10:16.871 00:10:16.871 FDP configurations log page 00:10:16.871 =========================== 00:10:16.871 Number of FDP configurations: 1 00:10:16.871 Version: 0 00:10:16.871 Size: 112 00:10:16.871 FDP Configuration Descriptor: 0 00:10:16.871 Descriptor Size: 96 00:10:16.871 Reclaim Group Identifier format: 2 00:10:16.871 FDP Volatile Write Cache: Not Present 00:10:16.871 FDP Configuration: Valid 00:10:16.871 Vendor Specific Size: 0 00:10:16.871 Number of Reclaim Groups: 2 00:10:16.871 Number of Reclaim Unit Handles: 8 00:10:16.871 Max Placement Identifiers: 128 00:10:16.871 Number of Namespaces Supported: 256 00:10:16.871 Reclaim unit Nominal Size: 6000000 bytes 00:10:16.871 Estimated Reclaim Unit Time Limit: Not Reported 00:10:16.871 RUH Desc #000: RUH Type: Initially Isolated 00:10:16.871 RUH Desc #001: RUH Type: Initially Isolated 00:10:16.871 RUH Desc #002: RUH Type: Initially Isolated 00:10:16.871 RUH Desc #003: RUH Type: Initially Isolated 00:10:16.871 RUH Desc #004: RUH Type: Initially Isolated 00:10:16.871 RUH Desc #005: RUH Type: Initially Isolated 00:10:16.871 RUH Desc #006: RUH Type: Initially Isolated 00:10:16.871 RUH Desc #007: RUH Type: Initially Isolated 00:10:16.871 00:10:16.871 FDP reclaim unit handle usage log page 00:10:16.871 ====================================== 00:10:16.871 Number of Reclaim Unit Handles: 8 00:10:16.871 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:16.871 RUH Usage Desc #001: RUH Attributes: Unused 00:10:16.871 RUH Usage Desc #002: RUH Attributes: Unused 00:10:16.871 RUH Usage Desc #003: RUH Attributes: Unused 00:10:16.871 RUH Usage Desc #004: RUH Attributes: Unused 00:10:16.871 RUH Usage Desc #005: RUH Attributes: Unused 00:10:16.871 RUH Usage Desc #006: RUH Attributes: Unused 00:10:16.871 RUH Usage Desc #007: RUH Attributes: Unused 00:10:16.871 00:10:16.871 FDP statistics log page 00:10:16.871 ======================= 00:10:16.871 Host bytes with metadata written: 994508800 00:10:16.871 Media bytes with metadata written: 994754560 00:10:16.871 Media bytes erased: 0 00:10:16.871 00:10:16.871 FDP Reclaim unit handle status 00:10:16.871 ============================== 00:10:16.871 Number of RUHS descriptors: 2 00:10:16.871 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000b90 00:10:16.871 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:16.871 00:10:16.871 FDP write on placement id: 0 success 00:10:16.871 00:10:16.871 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:16.871 00:10:16.871 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:16.871 00:10:16.871 Get Feature: FDP Events for Placement handle: #0 00:10:16.871 ======================== 00:10:16.871 Number of FDP Events: 6 00:10:16.871 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:16.871 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:16.871 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:16.871 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:16.871 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:16.871 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:16.871 00:10:16.871 FDP events log page
00:10:16.871 =================== 00:10:16.871 Number of FDP events: 1 00:10:16.871 FDP Event #0: 00:10:16.871 Event Type: RU Not Written to Capacity 00:10:16.871 Placement Identifier: Valid 00:10:16.871 NSID: Valid 00:10:16.871 Location: Valid 00:10:16.871 Placement Identifier: 0 00:10:16.871 Event Timestamp: 8 00:10:16.871 Namespace Identifier: 1 00:10:16.871 Reclaim Group Identifier: 0 00:10:16.871 Reclaim Unit Handle Identifier: 0 00:10:16.871 00:10:16.871 FDP test passed 00:10:16.871 00:10:16.871 real 0m0.294s 00:10:16.871 user 0m0.086s 00:10:16.871 sys 0m0.106s 00:10:16.871 13:51:09 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.871 ************************************ 00:10:16.871 END TEST nvme_flexible_data_placement 00:10:16.871 ************************************ 00:10:16.871 13:51:09 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:16.871 ************************************ 00:10:16.871 END TEST nvme_fdp 00:10:16.871 ************************************ 00:10:16.871 00:10:16.871 real 0m8.966s 00:10:16.871 user 0m1.581s 00:10:16.871 sys 0m2.346s 00:10:16.871 13:51:09 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.871 13:51:09 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:16.871 13:51:09 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:16.871 13:51:09 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:16.871 13:51:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:16.871 13:51:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.871 13:51:09 -- common/autotest_common.sh@10 -- # set +x 00:10:16.871 ************************************ 00:10:16.871 START TEST nvme_rpc 00:10:16.871 ************************************ 00:10:16.871 13:51:09 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:17.130 * Looking for test storage... 
00:10:17.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.130 13:51:10 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:17.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.130 --rc genhtml_branch_coverage=1 00:10:17.130 --rc genhtml_function_coverage=1 00:10:17.130 --rc genhtml_legend=1 00:10:17.130 --rc geninfo_all_blocks=1 00:10:17.130 --rc geninfo_unexecuted_blocks=1 00:10:17.130 00:10:17.130 ' 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:17.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.130 --rc genhtml_branch_coverage=1 00:10:17.130 --rc genhtml_function_coverage=1 00:10:17.130 --rc genhtml_legend=1 00:10:17.130 --rc geninfo_all_blocks=1 00:10:17.130 --rc geninfo_unexecuted_blocks=1 00:10:17.130 00:10:17.130 ' 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:17.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.130 --rc genhtml_branch_coverage=1 00:10:17.130 --rc genhtml_function_coverage=1 00:10:17.130 --rc genhtml_legend=1 00:10:17.130 --rc geninfo_all_blocks=1 00:10:17.130 --rc geninfo_unexecuted_blocks=1 00:10:17.130 00:10:17.130 ' 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:17.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.130 --rc genhtml_branch_coverage=1 00:10:17.130 --rc genhtml_function_coverage=1 00:10:17.130 --rc genhtml_legend=1 00:10:17.130 --rc geninfo_all_blocks=1 00:10:17.130 --rc geninfo_unexecuted_blocks=1 00:10:17.130 00:10:17.130 ' 00:10:17.130 13:51:10 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:17.130 13:51:10 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:17.130 13:51:10 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:17.389 13:51:10 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:17.389 13:51:10 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:17.389 13:51:10 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:17.389 13:51:10 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:17.389 13:51:10 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=68180 00:10:17.389 13:51:10 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:17.389 13:51:10 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:17.389 13:51:10 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 68180 00:10:17.389 13:51:10 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68180 ']' 00:10:17.389 13:51:10 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.389 13:51:10 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.389 13:51:10 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.389 13:51:10 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.389 13:51:10 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.389 [2024-12-11 13:51:10.334964] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
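Condensed, the nvme_rpc flow traced around here is three RPC calls against the freshly started spdk_tgt: attach the first controller by BDF, prove that bdev_nvme_apply_firmware fails cleanly for a missing file, then detach. A hedged sketch using only commands that appear in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # first NVMe BDF, as get_first_nvme_bdf derives it above
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "${bdfs[0]}"   # here: 0000:00:10.0
    # negative path: a nonexistent firmware file must come back as a JSON-RPC error
    if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
      echo 'expected an "open file failed" error' >&2
      exit 1
    fi
    $rpc bdev_nvme_detach_controller Nvme0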
00:10:17.389 [2024-12-11 13:51:10.335092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68180 ] 00:10:17.648 [2024-12-11 13:51:10.518276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:17.648 [2024-12-11 13:51:10.633339] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.648 [2024-12-11 13:51:10.633378] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.584 13:51:11 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.584 13:51:11 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:18.584 13:51:11 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:18.842 Nvme0n1 00:10:18.842 13:51:11 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:18.842 13:51:11 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:19.100 request: 00:10:19.100 { 00:10:19.100 "bdev_name": "Nvme0n1", 00:10:19.100 "filename": "non_existing_file", 00:10:19.100 "method": "bdev_nvme_apply_firmware", 00:10:19.100 "req_id": 1 00:10:19.100 } 00:10:19.100 Got JSON-RPC error response 00:10:19.100 response: 00:10:19.100 { 00:10:19.100 "code": -32603, 00:10:19.100 "message": "open file failed." 00:10:19.100 } 00:10:19.100 13:51:11 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:19.100 13:51:11 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:19.100 13:51:11 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:19.358 13:51:12 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:19.358 13:51:12 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 68180 00:10:19.358 13:51:12 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68180 ']' 00:10:19.358 13:51:12 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68180 00:10:19.358 13:51:12 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:19.358 13:51:12 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.358 13:51:12 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68180 00:10:19.358 killing process with pid 68180 00:10:19.358 13:51:12 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.358 13:51:12 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.358 13:51:12 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68180' 00:10:19.358 13:51:12 nvme_rpc -- common/autotest_common.sh@973 -- # kill 68180 00:10:19.358 13:51:12 nvme_rpc -- common/autotest_common.sh@978 -- # wait 68180 00:10:21.889 ************************************ 00:10:21.889 END TEST nvme_rpc 00:10:21.889 ************************************ 00:10:21.889 00:10:21.889 real 0m4.653s 00:10:21.889 user 0m8.493s 00:10:21.889 sys 0m0.760s 00:10:21.889 13:51:14 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.889 13:51:14 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.889 13:51:14 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:21.889 13:51:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:21.889 13:51:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.889 13:51:14 -- common/autotest_common.sh@10 -- # set +x 00:10:21.889 ************************************ 00:10:21.889 START TEST nvme_rpc_timeouts 00:10:21.889 ************************************ 00:10:21.889 13:51:14 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:21.889 * Looking for test storage... 00:10:21.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:21.889 13:51:14 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:21.889 13:51:14 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:10:21.889 13:51:14 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:21.889 13:51:14 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.889 13:51:14 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:21.889 13:51:14 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.889 13:51:14 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:21.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.889 --rc genhtml_branch_coverage=1 00:10:21.889 --rc genhtml_function_coverage=1 00:10:21.889 --rc genhtml_legend=1 00:10:21.889 --rc geninfo_all_blocks=1 00:10:21.889 --rc geninfo_unexecuted_blocks=1 00:10:21.889 00:10:21.889 ' 00:10:21.889 13:51:14 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:21.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.889 --rc genhtml_branch_coverage=1 00:10:21.889 --rc genhtml_function_coverage=1 00:10:21.889 --rc genhtml_legend=1 00:10:21.889 --rc geninfo_all_blocks=1 00:10:21.889 --rc geninfo_unexecuted_blocks=1 00:10:21.889 00:10:21.889 ' 00:10:21.889 13:51:14 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:21.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.889 --rc genhtml_branch_coverage=1 00:10:21.889 --rc genhtml_function_coverage=1 00:10:21.889 --rc genhtml_legend=1 00:10:21.889 --rc geninfo_all_blocks=1 00:10:21.889 --rc geninfo_unexecuted_blocks=1 00:10:21.889 00:10:21.889 ' 00:10:21.889 13:51:14 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:21.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.889 --rc genhtml_branch_coverage=1 00:10:21.889 --rc genhtml_function_coverage=1 00:10:21.889 --rc genhtml_legend=1 00:10:21.889 --rc geninfo_all_blocks=1 00:10:21.889 --rc geninfo_unexecuted_blocks=1 00:10:21.889 00:10:21.889 ' 00:10:21.889 13:51:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.889 13:51:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_68257 00:10:21.889 13:51:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_68257 00:10:21.889 13:51:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:21.889 13:51:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=68289 00:10:21.889 13:51:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:21.889 13:51:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 68289 00:10:21.889 13:51:14 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 68289 ']' 00:10:21.889 13:51:14 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.889 13:51:14 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.889 13:51:14 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.889 13:51:14 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.889 13:51:14 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:22.149 [2024-12-11 13:51:14.934754] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:10:22.149 [2024-12-11 13:51:14.934897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68289 ] 00:10:22.149 [2024-12-11 13:51:15.114818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:22.411 [2024-12-11 13:51:15.228841] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.411 [2024-12-11 13:51:15.228911] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.347 Checking default timeout settings: 00:10:23.347 13:51:16 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:23.347 13:51:16 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:23.347 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:23.347 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:23.347 Making settings changes with rpc: 00:10:23.347 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:23.347 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:23.605 Check default vs. modified settings: 00:10:23.605 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:23.605 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_68257 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_68257 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:24.173 Setting action_on_timeout is changed as expected. 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_68257 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_68257 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:24.173 Setting timeout_us is changed as expected. 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
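Each of the three setting checks follows the identical pattern: pull the value out of the save_config dump taken before bdev_nvme_set_options and the one taken after, strip the JSON punctuation, and require that the two differ. Roughly (tmp file names as in this run):

    check_setting() {
      local setting=$1 before modified
      before=$(grep "$setting" /tmp/settings_default_68257 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      modified=$(grep "$setting" /tmp/settings_modified_68257 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      if [[ $before == "$modified" ]]; then
        echo "Setting $setting was not changed!" >&2
        return 1
      fi
      echo "Setting $setting is changed as expected."
    }
    check_setting action_on_timeout   # none -> abort
    check_setting timeout_us          # 0    -> 12000000
    check_setting timeout_admin_us    # 0    -> 24000000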
00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_68257 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:24.173 13:51:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:24.173 13:51:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_68257 00:10:24.173 13:51:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:24.173 13:51:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:24.173 Setting timeout_admin_us is changed as expected. 00:10:24.173 13:51:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:24.173 13:51:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:24.173 13:51:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:24.173 13:51:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:24.173 13:51:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_68257 /tmp/settings_modified_68257 00:10:24.173 13:51:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 68289 00:10:24.173 13:51:17 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 68289 ']' 00:10:24.173 13:51:17 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 68289 00:10:24.173 13:51:17 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:24.173 13:51:17 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.173 13:51:17 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68289 00:10:24.173 killing process with pid 68289 00:10:24.173 13:51:17 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.173 13:51:17 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.173 13:51:17 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68289' 00:10:24.173 13:51:17 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 68289 00:10:24.173 13:51:17 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 68289 00:10:26.708 RPC TIMEOUT SETTING TEST PASSED. 00:10:26.708 13:51:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
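The killprocess helper traced just above is deliberately careful: it verifies the pid is alive (kill -0), resolves its command name with ps, checks whether it is a sudo wrapper, and only then kills and reaps it. A hedged reconstruction of that sequence (autotest_common.sh is the authority):

    killprocess() {
      local pid=$1 process_name
      kill -0 "$pid" || return 1                    # still alive?
      process_name=$(ps --no-headers -o comm= "$pid")
      if [[ $process_name == sudo ]]; then
        return 1   # assumption: the real helper treats sudo-wrapped children specially
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
    }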
00:10:26.708 ************************************ 00:10:26.708 END TEST nvme_rpc_timeouts 00:10:26.708 ************************************ 00:10:26.708 00:10:26.708 real 0m4.855s 00:10:26.708 user 0m9.149s 00:10:26.708 sys 0m0.778s 00:10:26.708 13:51:19 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.708 13:51:19 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:26.708 13:51:19 -- spdk/autotest.sh@239 -- # uname -s 00:10:26.708 13:51:19 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:26.708 13:51:19 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:26.708 13:51:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:26.708 13:51:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.708 13:51:19 -- common/autotest_common.sh@10 -- # set +x 00:10:26.708 ************************************ 00:10:26.708 START TEST sw_hotplug 00:10:26.708 ************************************ 00:10:26.708 13:51:19 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:26.708 * Looking for test storage... 00:10:26.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:26.708 13:51:19 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:26.708 13:51:19 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:10:26.708 13:51:19 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:26.708 13:51:19 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:26.708 13:51:19 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.708 13:51:19 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.708 13:51:19 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.708 13:51:19 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.708 13:51:19 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.708 13:51:19 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.708 13:51:19 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.708 13:51:19 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.708 13:51:19 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.708 13:51:19 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.708 13:51:19 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.708 13:51:19 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:26.708 13:51:19 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:26.708 13:51:19 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.708 13:51:19 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:26.708 13:51:19 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:26.968 13:51:19 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:26.968 13:51:19 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.968 13:51:19 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:26.968 13:51:19 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.968 13:51:19 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:26.968 13:51:19 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:26.968 13:51:19 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.968 13:51:19 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:26.968 13:51:19 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.968 13:51:19 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.968 13:51:19 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.968 13:51:19 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:26.968 13:51:19 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.968 13:51:19 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:26.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.968 --rc genhtml_branch_coverage=1 00:10:26.968 --rc genhtml_function_coverage=1 00:10:26.968 --rc genhtml_legend=1 00:10:26.968 --rc geninfo_all_blocks=1 00:10:26.968 --rc geninfo_unexecuted_blocks=1 00:10:26.968 00:10:26.968 ' 00:10:26.968 13:51:19 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:26.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.968 --rc genhtml_branch_coverage=1 00:10:26.968 --rc genhtml_function_coverage=1 00:10:26.968 --rc genhtml_legend=1 00:10:26.968 --rc geninfo_all_blocks=1 00:10:26.968 --rc geninfo_unexecuted_blocks=1 00:10:26.968 00:10:26.968 ' 00:10:26.968 13:51:19 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:26.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.968 --rc genhtml_branch_coverage=1 00:10:26.968 --rc genhtml_function_coverage=1 00:10:26.968 --rc genhtml_legend=1 00:10:26.968 --rc geninfo_all_blocks=1 00:10:26.968 --rc geninfo_unexecuted_blocks=1 00:10:26.968 00:10:26.968 ' 00:10:26.968 13:51:19 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:26.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.968 --rc genhtml_branch_coverage=1 00:10:26.968 --rc genhtml_function_coverage=1 00:10:26.968 --rc genhtml_legend=1 00:10:26.968 --rc geninfo_all_blocks=1 00:10:26.968 --rc geninfo_unexecuted_blocks=1 00:10:26.968 00:10:26.968 ' 00:10:26.968 13:51:19 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:27.565 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:27.565 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:27.565 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:27.565 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:27.565 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:27.565 13:51:20 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:27.565 13:51:20 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:27.565 13:51:20 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
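nvme_in_userspace, expanded in the trace below, finds NVMe controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), progif 02 (NVMe), hence the cc="0108" match against lspci's -p02 lines. The pipeline it assembles is, in effect:

    # list NVMe controller BDFs by class code (same pipeline as the trace below),
    # before filtering out BDFs the test is not allowed to touch
    lspci -mm -n -D | grep -i -- -p02 \
      | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'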
00:10:27.565 13:51:20 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:27.565 13:51:20 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:27.565 13:51:20 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:27.565 13:51:20 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:27.565 13:51:20 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:27.565 13:51:20 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:27.565 13:51:20 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:27.825 13:51:20 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:27.825 13:51:20 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:27.825 13:51:20 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:27.825 13:51:20 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:27.825 13:51:20 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:28.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:28.651 Waiting for block devices as requested 00:10:28.651 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.651 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.909 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.909 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:34.185 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:34.185 13:51:26 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:34.185 13:51:26 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:34.753 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:34.753 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:34.754 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:35.012 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:35.580 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:35.580 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:35.580 13:51:28 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:35.580 13:51:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:35.580 13:51:28 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:35.580 13:51:28 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:35.580 13:51:28 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=69170 00:10:35.580 13:51:28 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:35.580 13:51:28 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:35.580 13:51:28 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:35.580 13:51:28 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:35.580 13:51:28 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:35.580 13:51:28 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:35.580 13:51:28 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:35.580 13:51:28 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:35.580 13:51:28 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:35.580 13:51:28 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:35.580 13:51:28 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:35.580 13:51:28 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:35.580 13:51:28 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:35.580 13:51:28 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:35.839 Initializing NVMe Controllers 00:10:35.839 Attaching to 0000:00:10.0 00:10:35.839 Attaching to 0000:00:11.0 00:10:35.839 Attached to 0000:00:10.0 00:10:35.839 Attached to 0000:00:11.0 00:10:35.839 Initialization complete. Starting I/O... 
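debug_remove_attach_helper wraps the whole remove/attach loop in bash's time keyword with TIMEFORMAT=%2R, so only the elapsed seconds come back; that value is the 43.16 echoed at the end of this test. The idiom as a self-contained sketch (do_hotplug is a hypothetical stand-in for remove_attach_helper):

    do_hotplug() { sleep 1; }      # stand-in for the real remove/attach loop
    TIMEFORMAT=%2R                 # `time` reports just elapsed seconds, 2 decimals
    helper_time=$( { time do_hotplug >/dev/null; } 2>&1 )
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
      "$helper_time" 2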
00:10:35.839 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:35.839 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:35.839 00:10:37.215 QEMU NVMe Ctrl (12340 ): 1556 I/Os completed (+1556) 00:10:37.215 QEMU NVMe Ctrl (12341 ): 1556 I/Os completed (+1556) 00:10:37.215 00:10:38.151 QEMU NVMe Ctrl (12340 ): 3708 I/Os completed (+2152) 00:10:38.151 QEMU NVMe Ctrl (12341 ): 3710 I/Os completed (+2154) 00:10:38.151 00:10:39.089 QEMU NVMe Ctrl (12340 ): 5920 I/Os completed (+2212) 00:10:39.089 QEMU NVMe Ctrl (12341 ): 5922 I/Os completed (+2212) 00:10:39.089 00:10:40.024 QEMU NVMe Ctrl (12340 ): 8112 I/Os completed (+2192) 00:10:40.024 QEMU NVMe Ctrl (12341 ): 8114 I/Os completed (+2192) 00:10:40.024 00:10:40.960 QEMU NVMe Ctrl (12340 ): 10136 I/Os completed (+2024) 00:10:40.960 QEMU NVMe Ctrl (12341 ): 10138 I/Os completed (+2024) 00:10:40.960 00:10:41.897 13:51:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:41.897 13:51:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:41.897 13:51:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:41.897 [2024-12-11 13:51:34.627967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:41.897 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:41.897 [2024-12-11 13:51:34.630071] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.897 [2024-12-11 13:51:34.630253] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.897 [2024-12-11 13:51:34.630313] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.897 [2024-12-11 13:51:34.630426] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.897 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:41.897 [2024-12-11 13:51:34.633413] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.897 [2024-12-11 13:51:34.633570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.897 [2024-12-11 13:51:34.633627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.897 [2024-12-11 13:51:34.633753] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.897 13:51:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:41.897 13:51:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:41.897 [2024-12-11 13:51:34.672269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:41.897 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:41.897 [2024-12-11 13:51:34.674169] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.897 [2024-12-11 13:51:34.674221] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.897 [2024-12-11 13:51:34.674258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.897 [2024-12-11 13:51:34.674281] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.897 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:41.897 [2024-12-11 13:51:34.676861] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.897 [2024-12-11 13:51:34.677053] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.897 [2024-12-11 13:51:34.677087] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.897 [2024-12-11 13:51:34.677109] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.897 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:41.897 EAL: Scan for (pci) bus failed. 00:10:41.897 13:51:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:41.897 13:51:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:41.897 13:51:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:41.897 13:51:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:41.897 13:51:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:41.897 00:10:41.897 13:51:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:41.897 13:51:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:41.897 13:51:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:41.897 13:51:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:41.897 13:51:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:41.897 Attaching to 0000:00:10.0 00:10:41.897 Attached to 0000:00:10.0 00:10:42.157 13:51:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:42.157 13:51:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:42.157 13:51:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:42.157 Attaching to 0000:00:11.0 00:10:42.158 Attached to 0000:00:11.0 00:10:43.095 QEMU NVMe Ctrl (12340 ): 2096 I/Os completed (+2096) 00:10:43.095 QEMU NVMe Ctrl (12341 ): 1856 I/Os completed (+1856) 00:10:43.095 00:10:44.031 QEMU NVMe Ctrl (12340 ): 4344 I/Os completed (+2248) 00:10:44.031 QEMU NVMe Ctrl (12341 ): 4108 I/Os completed (+2252) 00:10:44.031 00:10:44.969 QEMU NVMe Ctrl (12340 ): 6544 I/Os completed (+2200) 00:10:44.969 QEMU NVMe Ctrl (12341 ): 6308 I/Os completed (+2200) 00:10:44.969 00:10:45.905 QEMU NVMe Ctrl (12340 ): 8744 I/Os completed (+2200) 00:10:45.905 QEMU NVMe Ctrl (12341 ): 8508 I/Os completed (+2200) 00:10:45.905 00:10:46.842 QEMU NVMe Ctrl (12340 ): 11000 I/Os completed (+2256) 00:10:46.842 QEMU NVMe Ctrl (12341 ): 10764 I/Os completed (+2256) 00:10:46.842 00:10:48.219 QEMU NVMe Ctrl (12340 ): 13236 I/Os completed (+2236) 00:10:48.219 QEMU NVMe Ctrl (12341 ): 13000 I/Os completed (+2236) 00:10:48.219 00:10:49.155 QEMU NVMe Ctrl (12340 ): 15496 I/Os completed (+2260) 00:10:49.155 QEMU NVMe Ctrl (12341 ): 15260 I/Os completed (+2260) 
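The `echo 1` traced at each hotplug event is a sysfs surprise-removal (hence the aborted outstanding commands and failed-state errors above), and the uio_pci_generic/BDF echoes afterwards re-probe the device. The node names below are assumptions from the standard PCI sysfs ABI, not a quote of sw_hotplug.sh:

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"    # surprise-remove; in-flight I/O aborts
    echo 1 > /sys/bus/pci/rescan                   # rediscover the function
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe       # bind to the overridden driver
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override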
00:10:49.155 00:10:50.093 QEMU NVMe Ctrl (12340 ): 17724 I/Os completed (+2228) 00:10:50.093 QEMU NVMe Ctrl (12341 ): 17488 I/Os completed (+2228) 00:10:50.093 00:10:51.029 QEMU NVMe Ctrl (12340 ): 19960 I/Os completed (+2236) 00:10:51.029 QEMU NVMe Ctrl (12341 ): 19725 I/Os completed (+2237) 00:10:51.029 00:10:51.987 QEMU NVMe Ctrl (12340 ): 22172 I/Os completed (+2212) 00:10:51.987 QEMU NVMe Ctrl (12341 ): 21937 I/Os completed (+2212) 00:10:51.987 00:10:52.930 QEMU NVMe Ctrl (12340 ): 24364 I/Os completed (+2192) 00:10:52.930 QEMU NVMe Ctrl (12341 ): 24129 I/Os completed (+2192) 00:10:52.930 00:10:53.867 QEMU NVMe Ctrl (12340 ): 26612 I/Os completed (+2248) 00:10:53.867 QEMU NVMe Ctrl (12341 ): 26377 I/Os completed (+2248) 00:10:53.867 00:10:54.126 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:54.126 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:54.126 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:54.126 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:54.126 [2024-12-11 13:51:47.026621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:54.126 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:54.126 [2024-12-11 13:51:47.028397] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.126 [2024-12-11 13:51:47.028564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.126 [2024-12-11 13:51:47.028621] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.126 [2024-12-11 13:51:47.028714] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.126 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:54.126 [2024-12-11 13:51:47.031711] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.126 [2024-12-11 13:51:47.031863] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.126 [2024-12-11 13:51:47.031916] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.126 [2024-12-11 13:51:47.032013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.126 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:10:54.126 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:54.126 EAL: Scan for (pci) bus failed. 00:10:54.126 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:54.126 [2024-12-11 13:51:47.066329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:54.126 Controller removed: QEMU NVMe Ctrl (12341 )
00:10:54.126 [2024-12-11 13:51:47.068019] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:54.126 [2024-12-11 13:51:47.068068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:54.126 [2024-12-11 13:51:47.068096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:54.126 [2024-12-11 13:51:47.068117] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:54.126 unregister_dev: QEMU NVMe Ctrl (12341 )
00:10:54.126 [2024-12-11 13:51:47.070703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:54.126 [2024-12-11 13:51:47.070749] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:54.126 [2024-12-11 13:51:47.070770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:54.126 [2024-12-11 13:51:47.070789] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:54.126 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:10:54.126 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:54.126 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:54.126 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:54.385 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:54.385 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:54.385 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:54.385 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:54.385 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:54.385 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:54.385 Attaching to 0000:00:10.0
00:10:54.385 Attached to 0000:00:10.0
00:10:54.385 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:54.385 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:54.385 13:51:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:10:54.385 Attaching to 0000:00:11.0
00:10:54.385 Attached to 0000:00:11.0
00:10:54.953 QEMU NVMe Ctrl (12340 ): 1208 I/Os completed (+1208)
00:10:54.953 QEMU NVMe Ctrl (12341 ): 956 I/Os completed (+956)
00:10:54.953
00:10:55.890 QEMU NVMe Ctrl (12340 ): 3444 I/Os completed (+2236)
00:10:55.890 QEMU NVMe Ctrl (12341 ): 3192 I/Os completed (+2236)
00:10:55.890
00:10:56.827 QEMU NVMe Ctrl (12340 ): 5652 I/Os completed (+2208)
00:10:56.827 QEMU NVMe Ctrl (12341 ): 5401 I/Os completed (+2209)
00:10:56.827
00:10:58.204 QEMU NVMe Ctrl (12340 ): 7876 I/Os completed (+2224)
00:10:58.204 QEMU NVMe Ctrl (12341 ): 7625 I/Os completed (+2224)
00:10:58.204
00:10:59.140 QEMU NVMe Ctrl (12340 ): 10056 I/Os completed (+2180)
00:10:59.140 QEMU NVMe Ctrl (12341 ): 9805 I/Os completed (+2180)
00:10:59.140
00:11:00.077 QEMU NVMe Ctrl (12340 ): 12212 I/Os completed (+2156)
00:11:00.077 QEMU NVMe Ctrl (12341 ): 11984 I/Os completed (+2179)
00:11:00.077
00:11:01.014 QEMU NVMe Ctrl (12340 ): 14332 I/Os completed (+2120)
00:11:01.014 QEMU NVMe Ctrl (12341 ): 14104 I/Os completed (+2120)
00:11:01.014
00:11:01.950 QEMU NVMe Ctrl (12340 ): 16552 I/Os completed (+2220)
00:11:01.950 QEMU NVMe Ctrl (12341 ): 16324 I/Os completed (+2220)
00:11:01.950
00:11:02.884 QEMU NVMe Ctrl (12340 ): 18796 I/Os completed (+2244)
00:11:02.884 QEMU NVMe Ctrl (12341 ): 18568 I/Os completed (+2244)
00:11:02.884
00:11:03.820 QEMU NVMe Ctrl (12340 ): 21024 I/Os completed (+2228)
00:11:03.820 QEMU NVMe Ctrl (12341 ): 20796 I/Os completed (+2228)
00:11:03.820
00:11:05.197 QEMU NVMe Ctrl (12340 ): 23236 I/Os completed (+2212)
00:11:05.197 QEMU NVMe Ctrl (12341 ): 23008 I/Os completed (+2212)
00:11:05.197
00:11:05.765 QEMU NVMe Ctrl (12340 ): 25440 I/Os completed (+2204)
00:11:05.765 QEMU NVMe Ctrl (12341 ): 25212 I/Os completed (+2204)
00:11:05.765
00:11:06.702 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:11:06.702 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:06.702 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:06.702 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:06.702 [2024-12-11 13:51:59.402435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:11:06.702 Controller removed: QEMU NVMe Ctrl (12340 )
00:11:06.702 [2024-12-11 13:51:59.404407] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:06.702 [2024-12-11 13:51:59.404565] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:06.702 [2024-12-11 13:51:59.404617] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:06.702 [2024-12-11 13:51:59.404718] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:06.702 unregister_dev: QEMU NVMe Ctrl (12340 )
00:11:06.702 [2024-12-11 13:51:59.407675] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:06.702 [2024-12-11 13:51:59.407813] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:06.702 [2024-12-11 13:51:59.407882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:06.702 [2024-12-11 13:51:59.407978] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:06.702 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:06.702 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:06.702 [2024-12-11 13:51:59.443232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:11:06.702 Controller removed: QEMU NVMe Ctrl (12341 )
00:11:06.702 [2024-12-11 13:51:59.444906] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:06.702 [2024-12-11 13:51:59.445047] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:06.702 [2024-12-11 13:51:59.445105] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:06.702 [2024-12-11 13:51:59.445217] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:06.702 unregister_dev: QEMU NVMe Ctrl (12341 )
00:11:06.702 [2024-12-11 13:51:59.447957] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:06.702 [2024-12-11 13:51:59.448080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:06.702 [2024-12-11 13:51:59.448138] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:06.702 [2024-12-11 13:51:59.448227] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:06.702 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:11:06.702 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:11:06.702 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:06.702 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:06.702 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:11:06.702 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:11:06.702 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:06.702 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:06.702 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:06.702 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:11:06.702 Attaching to 0000:00:10.0
00:11:06.702 Attached to 0000:00:10.0
00:11:06.960 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:11:06.960 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:06.960 13:51:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:11:06.960 Attaching to 0000:00:11.0
00:11:06.960 Attached to 0000:00:11.0
00:11:06.960 unregister_dev: QEMU NVMe Ctrl (12340 )
00:11:06.960 unregister_dev: QEMU NVMe Ctrl (12341 )
00:11:06.960 [2024-12-11 13:51:59.782474] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09
00:11:19.173 13:52:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:11:19.173 13:52:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:19.173 13:52:11 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.16
00:11:19.173 13:52:11 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.16
00:11:19.173 13:52:11 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:11:19.173 13:52:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.16
00:11:19.173 13:52:11 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.16 2
00:11:19.173 remove_attach_helper took 43.16s to complete (handling 2 nvme drive(s)) 13:52:11 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6
00:11:25.744 13:52:17 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 69170
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (69170) - No such process
00:11:25.744 13:52:17 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 69170
00:11:25.744 13:52:17 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT
00:11:25.744 13:52:17 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug
00:11:25.744 13:52:17 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev
00:11:25.744 13:52:17 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69725
00:11:25.744 13:52:17 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:11:25.744 13:52:17 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
00:11:25.744 13:52:17 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69725
00:11:25.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:25.744 13:52:17 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69725 ']'
00:11:25.744 13:52:17 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:25.744 13:52:17 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:25.744 13:52:17 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:25.744 13:52:17 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:25.744 13:52:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:25.745 [2024-12-11 13:52:17.896061] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization...
00:11:25.745 [2024-12-11 13:52:17.896420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69725 ]
00:11:25.745 [2024-12-11 13:52:18.079332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:25.745 [2024-12-11 13:52:18.197491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:26.312 13:52:19 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:26.312 13:52:19 sw_hotplug -- common/autotest_common.sh@868 -- # return 0
00:11:26.312 13:52:19 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:11:26.312 13:52:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.312 13:52:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:26.312 13:52:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.312 13:52:19 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true
00:11:26.312 13:52:19 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:11:26.312 13:52:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:11:26.312 13:52:19 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:11:26.312 13:52:19 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:11:26.312 13:52:19 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:11:26.312 13:52:19 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:11:26.312 13:52:19 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:11:26.312 13:52:19 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:11:26.312 13:52:19 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:11:26.312 13:52:19 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:11:26.312 13:52:19 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:11:26.312 13:52:19 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:32.879 13:52:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.879 13:52:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:32.879 [2024-12-11 13:52:25.163665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:11:32.879 [2024-12-11 13:52:25.166286] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:32.879 [2024-12-11 13:52:25.166339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:32.879 [2024-12-11 13:52:25.166358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:32.879 [2024-12-11 13:52:25.166386] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:32.879 [2024-12-11 13:52:25.166398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:32.879 [2024-12-11 13:52:25.166414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:32.879 [2024-12-11 13:52:25.166428] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:32.879 [2024-12-11 13:52:25.166442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:32.879 [2024-12-11 13:52:25.166453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:32.879 [2024-12-11 13:52:25.166471] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:32.879 [2024-12-11 13:52:25.166481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:32.879 [2024-12-11 13:52:25.166495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:32.879 13:52:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:11:32.879 [2024-12-11 13:52:25.563032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:11:32.879 [2024-12-11 13:52:25.565615] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:32.879 [2024-12-11 13:52:25.565803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:32.879 [2024-12-11 13:52:25.565850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:32.879 [2024-12-11 13:52:25.565878] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:32.879 [2024-12-11 13:52:25.565894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:32.879 [2024-12-11 13:52:25.565906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:32.879 [2024-12-11 13:52:25.565921] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:32.879 [2024-12-11 13:52:25.565933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:32.879 [2024-12-11 13:52:25.565947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:32.879 [2024-12-11 13:52:25.565960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:32.879 [2024-12-11 13:52:25.565974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:32.879 [2024-12-11 13:52:25.565985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:32.879 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:32.879 13:52:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.879 13:52:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:32.880 13:52:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.880 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:11:32.880 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:11:32.880 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:32.880 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:32.880 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:11:33.138 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:11:33.138 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:33.138 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:33.138 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:33.138 13:52:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:11:33.138 13:52:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:11:33.138 13:52:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:33.138 13:52:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:45.345 13:52:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.345 13:52:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:45.345 13:52:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:45.345 13:52:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.345 13:52:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:45.345 [2024-12-11 13:52:38.242584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:11:45.345 [2024-12-11 13:52:38.245070] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:45.345 [2024-12-11 13:52:38.245116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:45.345 [2024-12-11 13:52:38.245134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:45.345 [2024-12-11 13:52:38.245160] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:45.345 [2024-12-11 13:52:38.245172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:45.345 [2024-12-11 13:52:38.245186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:45.345 [2024-12-11 13:52:38.245199] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:45.345 [2024-12-11 13:52:38.245212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:45.345 [2024-12-11 13:52:38.245224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:45.345 [2024-12-11 13:52:38.245239] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:45.345 [2024-12-11 13:52:38.245250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:45.345 [2024-12-11 13:52:38.245264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:45.345 13:52:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:11:45.345 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:11:45.604 [2024-12-11 13:52:38.641954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:11:45.604 [2024-12-11 13:52:38.644406] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:45.604 [2024-12-11 13:52:38.644567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:45.604 [2024-12-11 13:52:38.644602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:45.604 [2024-12-11 13:52:38.644665] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:45.604 [2024-12-11 13:52:38.644685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:45.604 [2024-12-11 13:52:38.644697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:45.604 [2024-12-11 13:52:38.644713] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:45.604 [2024-12-11 13:52:38.644725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:45.604 [2024-12-11 13:52:38.644739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:45.604 [2024-12-11 13:52:38.644752] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:45.604 [2024-12-11 13:52:38.644765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:45.604 [2024-12-11 13:52:38.644777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:45.863 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:11:45.863 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:45.863 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:45.863 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:45.863 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:45.863 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:45.863 13:52:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.863 13:52:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:45.863 13:52:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.122 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:11:46.122 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:11:46.122 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:46.122 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:46.122 13:52:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:11:46.122 13:52:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:11:46.122 13:52:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:46.122 13:52:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:46.122 13:52:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:46.122 13:52:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:11:46.122 13:52:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:11:46.122 13:52:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:46.122 13:52:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:58.355 13:52:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.355 13:52:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:58.355 13:52:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:58.355 [2024-12-11 13:52:51.221752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:11:58.355 [2024-12-11 13:52:51.224725] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:58.355 [2024-12-11 13:52:51.224887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:58.355 [2024-12-11 13:52:51.225004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:58.355 [2024-12-11 13:52:51.225073] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:58.355 [2024-12-11 13:52:51.225108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:58.355 [2024-12-11 13:52:51.225220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:58.355 [2024-12-11 13:52:51.225276] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:58.355 [2024-12-11 13:52:51.225311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:58.355 [2024-12-11 13:52:51.225402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:58.355 [2024-12-11 13:52:51.225501] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:58.355 [2024-12-11 13:52:51.225537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:58.355 [2024-12-11 13:52:51.225622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:58.355 13:52:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.355 13:52:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:58.355 13:52:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:11:58.355 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:11:58.924 [2024-12-11 13:52:51.720931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:11:58.924 [2024-12-11 13:52:51.723194] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:58.924 [2024-12-11 13:52:51.723338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:58.924 [2024-12-11 13:52:51.723366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:58.924 [2024-12-11 13:52:51.723388] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:58.924 [2024-12-11 13:52:51.723403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:58.924 [2024-12-11 13:52:51.723415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:58.924 [2024-12-11 13:52:51.723431] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:58.924 [2024-12-11 13:52:51.723442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:58.924 [2024-12-11 13:52:51.723459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:58.924 [2024-12-11 13:52:51.723472] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:58.924 [2024-12-11 13:52:51.723486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:58.924 [2024-12-11 13:52:51.723497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:58.924 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:11:58.924 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:58.924 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:58.924 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:58.924 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:58.924 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:58.924 13:52:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.924 13:52:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:58.924 13:52:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.924 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:11:58.924 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:11:58.924 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:58.924 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:58.924 13:52:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:11:59.183 13:52:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:11:59.183 13:52:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:59.183 13:52:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:59.183 13:52:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:59.183 13:52:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:11:59.183 13:52:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:11:59.183 13:52:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:59.183 13:52:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:12:11.392 13:53:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:11.392 13:53:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:12:11.392 13:53:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:12:11.392 13:53:04 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.16
00:12:11.392 13:53:04 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.16
00:12:11.392 13:53:04 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.16
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.16 2
00:12:11.392 remove_attach_helper took 45.16s to complete (handling 2 nvme drive(s)) 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d
00:12:11.392 13:53:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:11.392 13:53:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:12:11.392 13:53:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:12:11.392 13:53:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:11.392 13:53:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:12:11.392 13:53:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:12:11.392 13:53:04 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:12:11.392 13:53:04 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:12:11.392 13:53:04 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:12:11.392 13:53:04 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:12:11.392 13:53:04 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:12:11.392 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:12:17.956 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:12:17.957 13:53:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:17.957 13:53:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:12:17.957 [2024-12-11 13:53:10.369169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:12:17.957 [2024-12-11 13:53:10.371081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:17.957 [2024-12-11 13:53:10.371238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:12:17.957 [2024-12-11 13:53:10.371349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:17.957 [2024-12-11 13:53:10.371477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:17.957 [2024-12-11 13:53:10.371496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:12:17.957 [2024-12-11 13:53:10.371513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:17.957 [2024-12-11 13:53:10.371527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:17.957 [2024-12-11 13:53:10.371541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:12:17.957 [2024-12-11 13:53:10.371553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:17.957 [2024-12-11 13:53:10.371569] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:17.957 [2024-12-11 13:53:10.371580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:12:17.957 [2024-12-11 13:53:10.371597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:17.957 13:53:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:12:17.957 [2024-12-11 13:53:10.768533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:12:17.957 [2024-12-11 13:53:10.770978] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:17.957 [2024-12-11 13:53:10.771019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:12:17.957 [2024-12-11 13:53:10.771041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:17.957 [2024-12-11 13:53:10.771064] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:17.957 [2024-12-11 13:53:10.771079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:12:17.957 [2024-12-11 13:53:10.771091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:17.957 [2024-12-11 13:53:10.771107] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:17.957 [2024-12-11 13:53:10.771118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:12:17.957 [2024-12-11 13:53:10.771132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:17.957 [2024-12-11 13:53:10.771145] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:17.957 [2024-12-11 13:53:10.771159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:12:17.957 [2024-12-11 13:53:10.771171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:12:17.957 13:53:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:17.957 13:53:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:12:17.957 13:53:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:12:17.957 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:12:18.217 13:53:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:12:18.217 13:53:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:12:18.217 13:53:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:12:18.217 13:53:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:12:18.217 13:53:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:12:18.217 13:53:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:12:18.217 13:53:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:12:18.217 13:53:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:12:18.476 13:53:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:12:18.476 13:53:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:12:18.476 13:53:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:12:30.683 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:12:30.683 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:12:30.683 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:12:30.683 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:12:30.683 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:12:30.683 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:12:30.683 13:53:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.683 13:53:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:12:30.684 13:53:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.684 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:12:30.684 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:12:30.684 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:12:30.684 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:12:30.684 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:12:30.684 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:12:30.684 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:12:30.684 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:12:30.684 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:12:30.684 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:12:30.684 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:12:30.684 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:12:30.684 13:53:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.684 13:53:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:12:30.684 [2024-12-11 13:53:23.448142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:12:30.684 [2024-12-11 13:53:23.449881] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:30.684 [2024-12-11 13:53:23.450033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:12:30.684 [2024-12-11 13:53:23.450157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:30.684 [2024-12-11 13:53:23.450319] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:30.684 [2024-12-11 13:53:23.450390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:12:30.684 [2024-12-11 13:53:23.450459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:30.684 [2024-12-11 13:53:23.450478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:30.684 [2024-12-11 13:53:23.450495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:12:30.684 [2024-12-11 13:53:23.450507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:30.684 [2024-12-11 13:53:23.450523] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:30.684 [2024-12-11 13:53:23.450535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:12:30.684 [2024-12-11 13:53:23.450549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:30.684 13:53:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.684 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:12:30.684 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:12:30.943 [2024-12-11 13:53:23.847510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:12:30.943 [2024-12-11 13:53:23.849097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:30.943 [2024-12-11 13:53:23.849253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:12:30.943 [2024-12-11 13:53:23.849283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:30.943 [2024-12-11 13:53:23.849304] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:30.943 [2024-12-11 13:53:23.849321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:12:30.943 [2024-12-11 13:53:23.849334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:30.943 [2024-12-11 13:53:23.849350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:30.943 [2024-12-11 13:53:23.849361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:12:30.943 [2024-12-11 13:53:23.849375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:30.943 [2024-12-11 13:53:23.849389] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:30.943 [2024-12-11 13:53:23.849402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:12:30.943 [2024-12-11 13:53:23.849414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:30.943 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:12:30.943 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:12:30.943 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:12:30.943 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:12:30.943 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:12:30.943 13:53:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:12:30.943 13:53:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.943 13:53:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:12:30.943 13:53:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:31.203 13:53:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:12:31.203 13:53:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:12:31.203 13:53:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:12:31.203 13:53:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:12:31.203 13:53:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:12:31.203 13:53:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:12:31.203 13:53:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:12:31.203 13:53:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:12:31.203 13:53:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:12:31.203 13:53:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:12:31.461 13:53:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:12:31.461 13:53:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:12:31.461 13:53:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:12:43.666 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:12:43.666 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:12:43.666 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:12:43.666 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:12:43.666 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:12:43.666 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:12:43.666 13:53:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.666 13:53:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:12:43.666 13:53:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.666 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:12:43.666 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:12:43.666 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:12:43.667 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:12:43.667 [2024-12-11 13:53:36.427278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:12:43.667 [2024-12-11 13:53:36.429425] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:43.667 [2024-12-11 13:53:36.429577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:12:43.667 [2024-12-11 13:53:36.429717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:43.667 [2024-12-11 13:53:36.429801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:43.667 [2024-12-11 13:53:36.429897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:12:43.667 [2024-12-11 13:53:36.429961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:43.667 [2024-12-11 13:53:36.430055] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:43.667 [2024-12-11 13:53:36.430099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:12:43.667 [2024-12-11 13:53:36.430149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:43.667 [2024-12-11 13:53:36.430248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:43.667 [2024-12-11 13:53:36.430280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:12:43.667 [2024-12-11 13:53:36.430431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:43.667 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:12:43.667 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:12:43.667 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:12:43.667 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:12:43.667 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:12:43.667 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:12:43.667 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:12:43.667 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:12:43.667 13:53:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.667 13:53:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:12:43.667 13:53:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.667 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:12:43.667 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:12:43.926 [2024-12-11 13:53:36.826643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:12:43.926 [2024-12-11 13:53:36.828724] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:43.926 [2024-12-11 13:53:36.828767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:12:43.926 [2024-12-11 13:53:36.828787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:43.926 [2024-12-11 13:53:36.828809] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:43.926 [2024-12-11 13:53:36.828832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:12:43.926 [2024-12-11 13:53:36.828844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:43.926 [2024-12-11 13:53:36.828860] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:43.926 [2024-12-11 13:53:36.828872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:12:43.926 [2024-12-11 13:53:36.828886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:43.926 [2024-12-11 13:53:36.828899] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:43.926 [2024-12-11 13:53:36.828915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:12:43.926 [2024-12-11 13:53:36.828927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:44.185 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:12:44.185 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:12:44.185 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:12:44.185 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:12:44.185 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:12:44.185 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:12:44.185 13:53:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:44.185 13:53:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:12:44.185 13:53:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.185 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:12:44.185 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:12:44.185 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:12:44.185 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:12:44.185 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:12:44.443 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:12:44.443 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:12:44.443 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:12:44.443 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:12:44.443 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:12:44.443 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:12:44.443 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:12:44.443 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:12:56.693 13:53:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:12:56.693 13:53:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:12:56.693 13:53:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:12:56.693 13:53:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:12:56.693 13:53:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:12:56.693 13:53:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:56.693 13:53:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:12:56.693 13:53:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:12:56.693 13:53:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:56.693 13:53:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:12:56.693 13:53:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:12:56.693 13:53:49 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.16
00:12:56.693 13:53:49 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.16
00:12:56.693 13:53:49 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:12:56.693 13:53:49 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.16
00:12:56.693 13:53:49 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.16 2
00:12:56.693 remove_attach_helper took 45.16s to complete (handling 2 nvme drive(s)) 13:53:49 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT
00:12:56.693 13:53:49 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69725
00:12:56.693 13:53:49 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69725 ']'
00:12:56.693 13:53:49 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69725
00:12:56.693 13:53:49 sw_hotplug -- common/autotest_common.sh@959 -- # uname
00:12:56.693 13:53:49 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:56.693 13:53:49 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69725
00:12:56.693 13:53:49 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:56.693 13:53:49
sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.693 13:53:49 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69725' 00:12:56.693 killing process with pid 69725 00:12:56.693 13:53:49 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69725 00:12:56.693 13:53:49 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69725 00:12:59.233 13:53:51 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:59.493 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:00.061 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:00.061 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:00.061 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:00.320 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:00.320 00:13:00.320 real 2m33.658s 00:13:00.320 user 1m51.092s 00:13:00.320 sys 0m22.767s 00:13:00.320 ************************************ 00:13:00.320 END TEST sw_hotplug 00:13:00.320 ************************************ 00:13:00.320 13:53:53 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.320 13:53:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:00.320 13:53:53 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:00.320 13:53:53 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:00.320 13:53:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:00.320 13:53:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.320 13:53:53 -- common/autotest_common.sh@10 -- # set +x 00:13:00.320 ************************************ 00:13:00.320 START TEST nvme_xnvme 00:13:00.320 ************************************ 00:13:00.320 13:53:53 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:00.580 * Looking for test storage... 
00:13:00.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:00.580 13:53:53 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:00.580 13:53:53 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:13:00.580 13:53:53 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:00.580 13:53:53 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.580 13:53:53 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:00.580 13:53:53 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.580 13:53:53 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.580 --rc genhtml_branch_coverage=1 00:13:00.580 --rc genhtml_function_coverage=1 00:13:00.580 --rc genhtml_legend=1 00:13:00.580 --rc geninfo_all_blocks=1 00:13:00.580 --rc geninfo_unexecuted_blocks=1 00:13:00.580 00:13:00.580 ' 00:13:00.580 13:53:53 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.580 --rc genhtml_branch_coverage=1 00:13:00.580 --rc genhtml_function_coverage=1 00:13:00.580 --rc genhtml_legend=1 00:13:00.580 --rc geninfo_all_blocks=1 00:13:00.580 --rc geninfo_unexecuted_blocks=1 00:13:00.580 00:13:00.580 ' 00:13:00.580 13:53:53 
nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.580 --rc genhtml_branch_coverage=1 00:13:00.580 --rc genhtml_function_coverage=1 00:13:00.580 --rc genhtml_legend=1 00:13:00.580 --rc geninfo_all_blocks=1 00:13:00.580 --rc geninfo_unexecuted_blocks=1 00:13:00.580 00:13:00.580 ' 00:13:00.580 13:53:53 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.581 --rc genhtml_branch_coverage=1 00:13:00.581 --rc genhtml_function_coverage=1 00:13:00.581 --rc genhtml_legend=1 00:13:00.581 --rc geninfo_all_blocks=1 00:13:00.581 --rc geninfo_unexecuted_blocks=1 00:13:00.581 00:13:00.581 ' 00:13:00.581 13:53:53 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:13:00.581 13:53:53 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:13:00.581 13:53:53 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:00.581 13:53:53 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:13:00.581 13:53:53 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:00.581 13:53:53 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:00.581 13:53:53 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:00.581 13:53:53 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:13:00.581 13:53:53 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:00.581 13:53:53 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 
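[editor's note] The `lt 1.15 2` trace earlier in this test (scripts/common.sh@333-@368, re-run when xnvme.sh re-sources common.sh) spells out the version comparison used to pick lcov options: both version strings are split on `.`, `-`, and `:` into arrays, then compared numerically field by field, with missing fields treated as 0. A condensed standalone sketch of that logic follows; the real helper routes each field through a `decimal` coercion function, which is simplified away here.

    # Sketch of the cmp_versions/lt logic visible in the scripts/common.sh trace.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 ver1_l ver2_l v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

        local lt=0 gt=0
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # absent fields compare as 0
            ((d1 > d2)) && { gt=1; break; }
            ((d1 < d2)) && { lt=1; break; }
        done

        case "$op" in
            "<") ((lt == 1)) ;;
            ">") ((gt == 1)) ;;
            *) return 1 ;;
        esac
    }

    lt() { cmp_versions "$1" "<" "$2"; }

    # Matches the decision in the trace: the installed lcov reports 1.15,
    # which is older than 2, so the pre-2.0 coverage flags get selected.
    lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'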
00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:00.581 13:53:53 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:00.581 13:53:53 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:00.581 13:53:53 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:00.581 13:53:53 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:13:00.581 13:53:53 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:13:00.581 13:53:53 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:13:00.581 13:53:53 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:13:00.581 13:53:53 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:13:00.581 13:53:53 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
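[editor's note] Immediately below, test/common/applications.sh defines each SPDK launcher (SPDK_APP, DD_APP, NVMF_APP, ...) as a bash array rather than a plain string, then slurps the generated include/spdk/config.h and glob-matches it for `#define SPDK_CONFIG_DEBUG` before evaluating SPDK_AUTOTEST_DEBUG_APPS. A hedged sketch of that pattern: the array convention lets callers append per-test options, but the trace only shows the guard, so the appended debug flag here is purely illustrative.

    # Sketch: app launchers as arrays, debug options gated on the config header
    # (applications.sh@19-24 in the trace below).
    _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
    SPDK_APP=("$_app_dir/spdk_tgt")

    config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
    if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        # Hypothetical example: only a debug build would accept extra
        # debug-only options when SPDK_AUTOTEST_DEBUG_APPS is set.
        (( SPDK_AUTOTEST_DEBUG_APPS )) && SPDK_APP+=("--wait-for-rpc")
    fi

    "${SPDK_APP[@]}" "$@"   # expands to the binary plus any appended options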
00:13:00.581 13:53:53 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:00.581 13:53:53 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:00.581 13:53:53 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:00.581 13:53:53 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:00.581 13:53:53 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:00.581 13:53:53 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:00.581 13:53:53 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:13:00.581 13:53:53 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:00.581 #define SPDK_CONFIG_H 00:13:00.581 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:00.581 #define SPDK_CONFIG_APPS 1 00:13:00.581 #define SPDK_CONFIG_ARCH native 00:13:00.581 #define SPDK_CONFIG_ASAN 1 00:13:00.581 #undef SPDK_CONFIG_AVAHI 00:13:00.581 #undef SPDK_CONFIG_CET 00:13:00.581 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:00.581 #define SPDK_CONFIG_COVERAGE 1 00:13:00.581 #define SPDK_CONFIG_CROSS_PREFIX 00:13:00.581 #undef SPDK_CONFIG_CRYPTO 00:13:00.581 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:00.581 #undef SPDK_CONFIG_CUSTOMOCF 00:13:00.581 #undef SPDK_CONFIG_DAOS 00:13:00.581 #define SPDK_CONFIG_DAOS_DIR 00:13:00.581 #define SPDK_CONFIG_DEBUG 1 00:13:00.581 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:00.581 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:00.581 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:00.581 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:00.581 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:00.581 #undef SPDK_CONFIG_DPDK_UADK 00:13:00.581 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:00.581 #define SPDK_CONFIG_EXAMPLES 1 00:13:00.581 #undef SPDK_CONFIG_FC 00:13:00.581 #define SPDK_CONFIG_FC_PATH 00:13:00.582 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:00.582 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:00.582 #define SPDK_CONFIG_FSDEV 1 00:13:00.582 #undef SPDK_CONFIG_FUSE 00:13:00.582 #undef SPDK_CONFIG_FUZZER 00:13:00.582 #define SPDK_CONFIG_FUZZER_LIB 00:13:00.582 #undef SPDK_CONFIG_GOLANG 00:13:00.582 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:00.582 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:00.582 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:00.582 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:00.582 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:00.582 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:00.582 #undef SPDK_CONFIG_HAVE_LZ4 00:13:00.582 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:00.582 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:00.582 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:00.582 #define SPDK_CONFIG_IDXD 1 00:13:00.582 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:00.582 #undef SPDK_CONFIG_IPSEC_MB 00:13:00.582 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:00.582 #define SPDK_CONFIG_ISAL 1 00:13:00.582 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:00.582 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:00.582 #define SPDK_CONFIG_LIBDIR 00:13:00.582 #undef SPDK_CONFIG_LTO 00:13:00.582 #define SPDK_CONFIG_MAX_LCORES 128 00:13:00.582 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:00.582 #define SPDK_CONFIG_NVME_CUSE 1 00:13:00.582 #undef SPDK_CONFIG_OCF 00:13:00.582 #define SPDK_CONFIG_OCF_PATH 00:13:00.582 #define SPDK_CONFIG_OPENSSL_PATH 00:13:00.582 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:00.582 
#define SPDK_CONFIG_PGO_DIR 00:13:00.582 #undef SPDK_CONFIG_PGO_USE 00:13:00.582 #define SPDK_CONFIG_PREFIX /usr/local 00:13:00.582 #undef SPDK_CONFIG_RAID5F 00:13:00.582 #undef SPDK_CONFIG_RBD 00:13:00.582 #define SPDK_CONFIG_RDMA 1 00:13:00.582 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:00.582 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:00.582 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:00.582 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:00.582 #define SPDK_CONFIG_SHARED 1 00:13:00.582 #undef SPDK_CONFIG_SMA 00:13:00.582 #define SPDK_CONFIG_TESTS 1 00:13:00.582 #undef SPDK_CONFIG_TSAN 00:13:00.582 #define SPDK_CONFIG_UBLK 1 00:13:00.582 #define SPDK_CONFIG_UBSAN 1 00:13:00.582 #undef SPDK_CONFIG_UNIT_TESTS 00:13:00.582 #undef SPDK_CONFIG_URING 00:13:00.582 #define SPDK_CONFIG_URING_PATH 00:13:00.582 #undef SPDK_CONFIG_URING_ZNS 00:13:00.582 #undef SPDK_CONFIG_USDT 00:13:00.582 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:00.582 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:00.582 #undef SPDK_CONFIG_VFIO_USER 00:13:00.582 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:00.582 #define SPDK_CONFIG_VHOST 1 00:13:00.582 #define SPDK_CONFIG_VIRTIO 1 00:13:00.582 #undef SPDK_CONFIG_VTUNE 00:13:00.582 #define SPDK_CONFIG_VTUNE_DIR 00:13:00.582 #define SPDK_CONFIG_WERROR 1 00:13:00.582 #define SPDK_CONFIG_WPDK_DIR 00:13:00.582 #define SPDK_CONFIG_XNVME 1 00:13:00.582 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:00.582 13:53:53 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:00.582 13:53:53 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:00.582 13:53:53 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.582 13:53:53 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.582 13:53:53 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.582 13:53:53 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.582 13:53:53 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.582 13:53:53 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.582 13:53:53 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:00.582 13:53:53 nvme_xnvme -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@68 -- # uname -s 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:13:00.582 13:53:53 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:13:00.582 13:53:53 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:13:00.582 13:53:53 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@142 -- 
# : true 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:00.583 
13:53:53 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:00.583 
13:53:53 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:13:00.583 13:53:53 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 
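[editor's note] The set_test_storage call traced next (autotest_common.sh@1696, then @341-@402) decides where this xnvme test's scratch space goes. It pads the 2 GiB request to requested_size=2214592512 (2 GiB plus 64 MiB of slack), snapshots `df -T` into associative arrays keyed by mount point, then resolves each candidate directory (storage_candidates is built at @356 from the test dir and a /tmp/spdk.XXXXXX fallback) to its mount and exports the first one with enough free bytes as SPDK_TEST_STORAGE. A condensed sketch, leaving out the tmpfs/ramfs special cases the real helper also handles:

    # Sketch of the set_test_storage() selection loop traced below.
    set_test_storage() {
        local requested_size=$1 target_dir mount target_space
        local -A mounts fss sizes avails uses
        local source fs size use avail _

        # df -T columns: Filesystem Type 1K-blocks Used Available Use% Mounted-on.
        while read -r source fs size use avail _ mount; do
            mounts["$mount"]=$source
            fss["$mount"]=$fs
            sizes["$mount"]=$((size * 1024))    # convert 1K blocks to bytes
            uses["$mount"]=$((use * 1024))
            avails["$mount"]=$((avail * 1024))
        done < <(df -T | grep -v Filesystem)

        for target_dir in "${storage_candidates[@]}"; do
            mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
            target_space=${avails["$mount"]}
            if ((target_space >= requested_size)); then
                export SPDK_TEST_STORAGE=$target_dir
                printf '* Found test storage at %s\n' "$target_dir"
                return 0
            fi
        done
        return 1
    }

In this run the first candidate already wins: /home/vagrant/spdk_repo/spdk/test/nvme/xnvme sits on the btrfs /home mount with target_space=13975429120 bytes free, well above the ~2.1 GB requested.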
00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 71076 ]] 00:13:00.584 13:53:53 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 71076 00:13:00.842 13:53:53 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:13:00.842 13:53:53 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.KsEowB 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.KsEowB/tests/xnvme /tmp/spdk.KsEowB 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975429120 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592588288 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:13:00.843 
13:53:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261657600 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975429120 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592588288 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266269696 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:13:00.843 13:53:53 
nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=95233515520 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4469264384 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:13:00.843 * Looking for test storage... 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13975429120 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:13:00.843 13:53:53 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:00.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:00.844 13:53:53 nvme_xnvme -- 
common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:00.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.844 --rc genhtml_branch_coverage=1 00:13:00.844 --rc genhtml_function_coverage=1 00:13:00.844 --rc genhtml_legend=1 00:13:00.844 --rc geninfo_all_blocks=1 00:13:00.844 --rc geninfo_unexecuted_blocks=1 00:13:00.844 00:13:00.844 ' 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:00.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.844 --rc genhtml_branch_coverage=1 00:13:00.844 --rc genhtml_function_coverage=1 00:13:00.844 --rc genhtml_legend=1 00:13:00.844 --rc geninfo_all_blocks=1 00:13:00.844 --rc geninfo_unexecuted_blocks=1 00:13:00.844 00:13:00.844 ' 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:00.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.844 --rc genhtml_branch_coverage=1 00:13:00.844 --rc genhtml_function_coverage=1 00:13:00.844 --rc genhtml_legend=1 00:13:00.844 --rc geninfo_all_blocks=1 00:13:00.844 --rc geninfo_unexecuted_blocks=1 00:13:00.844 00:13:00.844 ' 00:13:00.844 13:53:53 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:00.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.844 --rc genhtml_branch_coverage=1 00:13:00.844 --rc genhtml_function_coverage=1 00:13:00.844 --rc genhtml_legend=1 00:13:00.844 --rc geninfo_all_blocks=1 00:13:00.844 --rc geninfo_unexecuted_blocks=1 00:13:00.844 00:13:00.844 ' 00:13:00.844 13:53:53 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.844 13:53:53 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.844 13:53:53 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.844 13:53:53 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.844 13:53:53 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.845 13:53:53 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:00.845 13:53:53 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:13:00.845 
13:53:53 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:13:00.845 13:53:53 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:01.411 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:01.669 Waiting for block devices as requested 00:13:01.669 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:01.927 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:01.927 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:01.927 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:07.197 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:07.197 13:54:00 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:13:07.455 13:54:00 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:13:07.713 13:54:00 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:13:07.713 13:54:00 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:13:07.713 13:54:00 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:13:07.713 13:54:00 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:13:07.713 13:54:00 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:13:07.713 13:54:00 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:13:07.973 No valid GPT data, bailing 00:13:07.973 13:54:00 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:07.973 13:54:00 nvme_xnvme -- scripts/common.sh@394 -- # pt= 00:13:07.973 13:54:00 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:13:07.973 13:54:00 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:13:07.973 13:54:00 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:13:07.973 13:54:00 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:13:07.973 13:54:00 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:13:07.973 13:54:00 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:13:07.973 13:54:00 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:07.973 13:54:00 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:07.973 13:54:00 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:07.973 13:54:00 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:07.973 13:54:00 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:07.973 13:54:00 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:07.973 13:54:00 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:07.973 13:54:00 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:07.973 13:54:00 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:07.973 13:54:00 
nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:07.973 13:54:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.973 13:54:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.973 ************************************ 00:13:07.973 START TEST xnvme_rpc 00:13:07.973 ************************************ 00:13:07.973 13:54:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:07.973 13:54:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:07.973 13:54:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:07.973 13:54:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:07.973 13:54:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:07.973 13:54:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71472 00:13:07.973 13:54:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:07.973 13:54:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71472 00:13:07.973 13:54:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71472 ']' 00:13:07.973 13:54:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.973 13:54:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.973 13:54:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.973 13:54:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.973 13:54:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.973 [2024-12-11 13:54:00.921068] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
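The xnvme_rpc test tracing above follows a fixed sequence: start spdk_tgt, wait for its RPC socket, create an xnvme bdev over the raw NVMe device, read the config back to verify each parameter, then delete the bdev and kill the target. A minimal standalone sketch of that flow, assuming a stock SPDK checkout; the polling loop is a simplified stand-in for the harness's waitforlisten helper, not its actual implementation:

    #!/usr/bin/env bash
    # Start the SPDK target in the background and remember its pid.
    ./build/bin/spdk_tgt &
    tgt_pid=$!
    # Simplified waitforlisten: poll until the RPC socket answers.
    until ./scripts/rpc.py rpc_get_methods &>/dev/null; do sleep 0.1; done
    # Create an xnvme bdev over the raw device with the libaio mechanism.
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
    # Read the registered config back, as the rpc_xnvme checks below do.
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
    # Tear down: remove the bdev, then stop the target.
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    kill "$tgt_pid"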
00:13:07.973 [2024-12-11 13:54:00.921197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71472 ] 00:13:08.231 [2024-12-11 13:54:01.105202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.231 [2024-12-11 13:54:01.215618] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.168 xnvme_bdev 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:09.168 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:09.427 13:54:02 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71472 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71472 ']' 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71472 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71472 00:13:09.427 killing process with pid 71472 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71472' 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71472 00:13:09.427 13:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71472 00:13:11.964 ************************************ 00:13:11.964 END TEST xnvme_rpc 00:13:11.964 ************************************ 00:13:11.964 00:13:11.964 real 0m3.958s 00:13:11.964 user 0m4.121s 00:13:11.964 sys 0m0.537s 00:13:11.964 13:54:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.964 13:54:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.964 13:54:04 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:11.964 13:54:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:11.964 13:54:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.964 13:54:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.964 ************************************ 00:13:11.964 START TEST xnvme_bdevperf 00:13:11.964 ************************************ 00:13:11.964 13:54:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:11.964 13:54:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:11.964 13:54:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:11.964 13:54:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:11.964 13:54:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:11.964 13:54:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:11.964 13:54:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:11.964 13:54:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:11.964 { 00:13:11.964 "subsystems": [ 00:13:11.964 { 00:13:11.964 "subsystem": "bdev", 00:13:11.964 "config": [ 00:13:11.964 { 00:13:11.964 "params": { 00:13:11.964 "io_mechanism": "libaio", 00:13:11.964 "conserve_cpu": false, 00:13:11.964 "filename": "/dev/nvme0n1", 00:13:11.964 "name": "xnvme_bdev" 00:13:11.964 }, 00:13:11.964 "method": "bdev_xnvme_create" 00:13:11.964 }, 00:13:11.964 { 00:13:11.964 "method": "bdev_wait_for_examine" 00:13:11.964 } 00:13:11.964 ] 00:13:11.964 } 00:13:11.964 ] 00:13:11.964 } 00:13:11.964 [2024-12-11 13:54:04.940772] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:13:11.964 [2024-12-11 13:54:04.940917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71553 ] 00:13:12.224 [2024-12-11 13:54:05.120304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.224 [2024-12-11 13:54:05.235074] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.792 Running I/O for 5 seconds... 00:13:14.668 44614.00 IOPS, 174.27 MiB/s [2024-12-11T13:54:08.654Z] 42466.50 IOPS, 165.88 MiB/s [2024-12-11T13:54:10.034Z] 42260.00 IOPS, 165.08 MiB/s [2024-12-11T13:54:10.972Z] 42945.00 IOPS, 167.75 MiB/s [2024-12-11T13:54:10.972Z] 43175.20 IOPS, 168.65 MiB/s 00:13:17.925 Latency(us) 00:13:17.925 [2024-12-11T13:54:10.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.925 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:17.925 xnvme_bdev : 5.00 43149.04 168.55 0.00 0.00 1479.78 330.64 3342.60 00:13:17.925 [2024-12-11T13:54:10.972Z] =================================================================================================================== 00:13:17.925 [2024-12-11T13:54:10.972Z] Total : 43149.04 168.55 0.00 0.00 1479.78 330.64 3342.60 00:13:18.860 13:54:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:18.860 13:54:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:18.860 13:54:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:18.860 13:54:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:18.860 13:54:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:18.860 { 00:13:18.860 "subsystems": [ 00:13:18.860 { 00:13:18.860 "subsystem": "bdev", 00:13:18.860 "config": [ 00:13:18.860 { 00:13:18.860 "params": { 00:13:18.860 "io_mechanism": "libaio", 00:13:18.860 "conserve_cpu": false, 00:13:18.860 "filename": "/dev/nvme0n1", 00:13:18.860 "name": "xnvme_bdev" 00:13:18.860 }, 00:13:18.860 "method": "bdev_xnvme_create" 00:13:18.860 }, 00:13:18.860 { 00:13:18.860 "method": "bdev_wait_for_examine" 00:13:18.860 } 00:13:18.860 ] 00:13:18.860 } 00:13:18.860 ] 00:13:18.860 } 00:13:18.860 [2024-12-11 13:54:11.818090] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
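The bdevperf invocations above take no config file on disk: gen_conf prints the subsystems JSON shown in the log, and --json /dev/fd/62 is simply the read end of a bash process substitution. A minimal sketch of the same pattern, with the JSON inlined to mirror the config printed above (paths as in this workspace):

    # The harness pipes gen_conf output through <(...), which bash exposes
    # as /dev/fd/NN; bdevperf reads the bdev config from that descriptor.
    json='{"subsystems": [{"subsystem": "bdev", "config": [
      {"method": "bdev_xnvme_create", "params": {"name": "xnvme_bdev",
       "filename": "/dev/nvme0n1", "io_mechanism": "libaio", "conserve_cpu": false}},
      {"method": "bdev_wait_for_examine"}]}]}'
    ./build/examples/bdevperf --json <(printf '%s' "$json") \
        -q 64 -o 4096 -w randread -t 5 -T xnvme_bdev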
00:13:18.860 [2024-12-11 13:54:11.818363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71634 ] 00:13:19.119 [2024-12-11 13:54:11.999165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.119 [2024-12-11 13:54:12.107200] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.685 Running I/O for 5 seconds... 00:13:21.556 44330.00 IOPS, 173.16 MiB/s [2024-12-11T13:54:15.538Z] 44201.00 IOPS, 172.66 MiB/s [2024-12-11T13:54:16.526Z] 44153.67 IOPS, 172.48 MiB/s [2024-12-11T13:54:17.462Z] 44106.25 IOPS, 172.29 MiB/s 00:13:24.416 Latency(us) 00:13:24.416 [2024-12-11T13:54:17.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.416 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:24.416 xnvme_bdev : 5.00 44030.15 171.99 0.00 0.00 1449.97 165.32 3039.92 00:13:24.416 [2024-12-11T13:54:17.463Z] =================================================================================================================== 00:13:24.416 [2024-12-11T13:54:17.463Z] Total : 44030.15 171.99 0.00 0.00 1449.97 165.32 3039.92 00:13:25.790 00:13:25.790 real 0m13.742s 00:13:25.790 user 0m4.888s 00:13:25.790 sys 0m5.839s 00:13:25.790 13:54:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.790 13:54:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:25.790 ************************************ 00:13:25.790 END TEST xnvme_bdevperf 00:13:25.790 ************************************ 00:13:25.790 13:54:18 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:25.790 13:54:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:25.790 13:54:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.790 13:54:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.790 ************************************ 00:13:25.790 START TEST xnvme_fio_plugin 00:13:25.790 ************************************ 00:13:25.790 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:25.790 13:54:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:25.790 13:54:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:25.790 13:54:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:25.790 13:54:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:25.790 13:54:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:25.790 13:54:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:25.790 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:25.790 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:13:25.790 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:25.790 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:25.790 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:25.790 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:25.791 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:25.791 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:25.791 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:25.791 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:25.791 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:25.791 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:25.791 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:25.791 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:25.791 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:25.791 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:25.791 13:54:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:25.791 { 00:13:25.791 "subsystems": [ 00:13:25.791 { 00:13:25.791 "subsystem": "bdev", 00:13:25.791 "config": [ 00:13:25.791 { 00:13:25.791 "params": { 00:13:25.791 "io_mechanism": "libaio", 00:13:25.791 "conserve_cpu": false, 00:13:25.791 "filename": "/dev/nvme0n1", 00:13:25.791 "name": "xnvme_bdev" 00:13:25.791 }, 00:13:25.791 "method": "bdev_xnvme_create" 00:13:25.791 }, 00:13:25.791 { 00:13:25.791 "method": "bdev_wait_for_examine" 00:13:25.791 } 00:13:25.791 ] 00:13:25.791 } 00:13:25.791 ] 00:13:25.791 } 00:13:26.048 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:26.048 fio-3.35 00:13:26.048 Starting 1 thread 00:13:32.613 00:13:32.613 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71753: Wed Dec 11 13:54:24 2024 00:13:32.613 read: IOPS=42.8k, BW=167MiB/s (175MB/s)(837MiB/5001msec) 00:13:32.613 slat (usec): min=4, max=784, avg=20.35, stdev=22.99 00:13:32.613 clat (usec): min=68, max=5966, avg=874.12, stdev=538.61 00:13:32.613 lat (usec): min=102, max=6028, avg=894.48, stdev=542.41 00:13:32.613 clat percentiles (usec): 00:13:32.613 | 1.00th=[ 172], 5.00th=[ 253], 10.00th=[ 326], 20.00th=[ 453], 00:13:32.613 | 30.00th=[ 570], 40.00th=[ 676], 50.00th=[ 783], 60.00th=[ 898], 00:13:32.613 | 70.00th=[ 1020], 80.00th=[ 1188], 90.00th=[ 1467], 95.00th=[ 1795], 00:13:32.613 | 99.00th=[ 2966], 99.50th=[ 3490], 99.90th=[ 4424], 99.95th=[ 4752], 00:13:32.613 | 99.99th=[ 5276] 00:13:32.613 bw ( KiB/s): min=154208, max=195184, per=100.00%, avg=173738.67, stdev=14396.16, samples=9 
00:13:32.613 iops : min=38552, max=48796, avg=43434.67, stdev=3599.04, samples=9 00:13:32.613 lat (usec) : 100=0.05%, 250=4.78%, 500=19.28%, 750=22.64%, 1000=21.51% 00:13:32.613 lat (msec) : 2=28.31%, 4=3.22%, 10=0.22% 00:13:32.613 cpu : usr=25.96%, sys=52.10%, ctx=75, majf=0, minf=764 00:13:32.613 IO depths : 1=0.1%, 2=1.2%, 4=4.3%, 8=11.1%, 16=25.8%, 32=55.7%, >=64=1.8% 00:13:32.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.613 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:13:32.613 issued rwts: total=214202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.613 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:32.613 00:13:32.613 Run status group 0 (all jobs): 00:13:32.613 READ: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=837MiB (877MB), run=5001-5001msec 00:13:33.182 ----------------------------------------------------- 00:13:33.182 Suppressions used: 00:13:33.182 count bytes template 00:13:33.182 1 11 /usr/src/fio/parse.c 00:13:33.182 1 8 libtcmalloc_minimal.so 00:13:33.182 1 904 libcrypto.so 00:13:33.182 ----------------------------------------------------- 00:13:33.182 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:33.182 13:54:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:33.182 { 00:13:33.182 "subsystems": [ 00:13:33.182 { 00:13:33.182 "subsystem": "bdev", 00:13:33.182 "config": [ 00:13:33.182 { 00:13:33.182 "params": { 00:13:33.182 "io_mechanism": "libaio", 00:13:33.182 "conserve_cpu": false, 00:13:33.182 "filename": "/dev/nvme0n1", 00:13:33.182 "name": "xnvme_bdev" 00:13:33.182 }, 00:13:33.182 "method": "bdev_xnvme_create" 00:13:33.182 }, 00:13:33.182 { 00:13:33.182 "method": "bdev_wait_for_examine" 00:13:33.182 } 00:13:33.182 ] 00:13:33.182 } 00:13:33.182 ] 00:13:33.182 } 00:13:33.441 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:33.441 fio-3.35 00:13:33.441 Starting 1 thread 00:13:40.033 00:13:40.033 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71851: Wed Dec 11 13:54:32 2024 00:13:40.033 write: IOPS=44.9k, BW=176MiB/s (184MB/s)(878MiB/5001msec); 0 zone resets 00:13:40.033 slat (usec): min=4, max=1108, avg=19.35, stdev=24.52 00:13:40.033 clat (usec): min=35, max=6028, avg=843.86, stdev=533.24 00:13:40.033 lat (usec): min=76, max=6089, avg=863.22, stdev=537.36 00:13:40.033 clat percentiles (usec): 00:13:40.033 | 1.00th=[ 178], 5.00th=[ 255], 10.00th=[ 326], 20.00th=[ 449], 00:13:40.033 | 30.00th=[ 553], 40.00th=[ 660], 50.00th=[ 758], 60.00th=[ 865], 00:13:40.033 | 70.00th=[ 979], 80.00th=[ 1106], 90.00th=[ 1352], 95.00th=[ 1713], 00:13:40.033 | 99.00th=[ 3097], 99.50th=[ 3720], 99.90th=[ 4555], 99.95th=[ 4817], 00:13:40.033 | 99.99th=[ 5211] 00:13:40.033 bw ( KiB/s): min=156368, max=200016, per=100.00%, avg=179887.67, stdev=15211.92, samples=9 00:13:40.033 iops : min=39092, max=50004, avg=44971.89, stdev=3802.97, samples=9 00:13:40.033 lat (usec) : 50=0.01%, 100=0.03%, 250=4.68%, 500=20.06%, 750=24.14% 00:13:40.033 lat (usec) : 1000=23.15% 00:13:40.033 lat (msec) : 2=24.54%, 4=3.07%, 10=0.33% 00:13:40.033 cpu : usr=27.26%, sys=52.04%, ctx=105, majf=0, minf=765 00:13:40.033 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=10.7%, 16=25.9%, 32=56.7%, >=64=1.8% 00:13:40.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.033 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:13:40.033 issued rwts: total=0,224785,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.033 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:40.033 00:13:40.033 Run status group 0 (all jobs): 00:13:40.033 WRITE: bw=176MiB/s (184MB/s), 176MiB/s-176MiB/s (184MB/s-184MB/s), io=878MiB (921MB), run=5001-5001msec 00:13:40.600 ----------------------------------------------------- 00:13:40.600 Suppressions used: 00:13:40.600 count bytes template 00:13:40.601 1 11 /usr/src/fio/parse.c 00:13:40.601 1 8 libtcmalloc_minimal.so 00:13:40.601 1 904 libcrypto.so 00:13:40.601 ----------------------------------------------------- 00:13:40.601 00:13:40.601 00:13:40.601 real 0m14.880s 00:13:40.601 user 0m6.429s 00:13:40.601 sys 0m5.971s 00:13:40.601 
************************************ 00:13:40.601 END TEST xnvme_fio_plugin 00:13:40.601 ************************************ 00:13:40.601 13:54:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.601 13:54:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:40.601 13:54:33 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:40.601 13:54:33 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:40.601 13:54:33 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:40.601 13:54:33 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:40.601 13:54:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:40.601 13:54:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.601 13:54:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.601 ************************************ 00:13:40.601 START TEST xnvme_rpc 00:13:40.601 ************************************ 00:13:40.601 13:54:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:40.601 13:54:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:40.601 13:54:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:40.601 13:54:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:40.601 13:54:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:40.601 13:54:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71943 00:13:40.601 13:54:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:40.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.601 13:54:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71943 00:13:40.601 13:54:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71943 ']' 00:13:40.601 13:54:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.601 13:54:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.601 13:54:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.601 13:54:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.601 13:54:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.859 [2024-12-11 13:54:33.698528] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
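This second xnvme_rpc pass exercises conserve_cpu=true: the cc map set up above turns the boolean into the -c flag on bdev_xnvme_create, and each rpc_xnvme check below is a framework_get_config call filtered through jq. A small sketch of that verification step; check_param is a hypothetical helper name, but the jq filter is the one the harness uses:

    # Hypothetical helper, not part of the harness: print one creation
    # parameter of the registered xnvme bdev.
    check_param() {
        local key=$1
        ./scripts/rpc.py framework_get_config bdev \
            | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.${key}"
    }
    # With the bdev created via 'bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c':
    [[ "$(check_param conserve_cpu)" == true ]] || echo "conserve_cpu not set"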
00:13:40.859 [2024-12-11 13:54:33.698658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71943 ] 00:13:40.859 [2024-12-11 13:54:33.879042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.118 [2024-12-11 13:54:33.997567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.052 xnvme_bdev 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.052 13:54:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:42.052 13:54:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.052 13:54:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:42.052 13:54:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:42.052 13:54:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:42.052 13:54:35 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.052 13:54:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.052 13:54:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:42.052 13:54:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.052 13:54:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:42.053 13:54:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:42.053 13:54:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.053 13:54:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.053 13:54:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.053 13:54:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71943 00:13:42.053 13:54:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71943 ']' 00:13:42.053 13:54:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71943 00:13:42.053 13:54:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:42.053 13:54:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.053 13:54:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71943 00:13:42.311 killing process with pid 71943 00:13:42.311 13:54:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:42.311 13:54:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:42.311 13:54:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71943' 00:13:42.311 13:54:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71943 00:13:42.311 13:54:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71943 00:13:44.845 00:13:44.845 real 0m3.935s 00:13:44.845 user 0m3.974s 00:13:44.845 sys 0m0.539s 00:13:44.845 13:54:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.845 ************************************ 00:13:44.845 END TEST xnvme_rpc 00:13:44.845 ************************************ 00:13:44.845 13:54:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.845 13:54:37 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:44.845 13:54:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:44.845 13:54:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.845 13:54:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.845 ************************************ 00:13:44.845 START TEST xnvme_bdevperf 00:13:44.845 ************************************ 00:13:44.845 13:54:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:44.845 13:54:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:44.845 13:54:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:44.845 13:54:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:44.845 13:54:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:44.845 13:54:37 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:44.845 13:54:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:44.845 13:54:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:44.845 { 00:13:44.845 "subsystems": [ 00:13:44.845 { 00:13:44.845 "subsystem": "bdev", 00:13:44.845 "config": [ 00:13:44.846 { 00:13:44.846 "params": { 00:13:44.846 "io_mechanism": "libaio", 00:13:44.846 "conserve_cpu": true, 00:13:44.846 "filename": "/dev/nvme0n1", 00:13:44.846 "name": "xnvme_bdev" 00:13:44.846 }, 00:13:44.846 "method": "bdev_xnvme_create" 00:13:44.846 }, 00:13:44.846 { 00:13:44.846 "method": "bdev_wait_for_examine" 00:13:44.846 } 00:13:44.846 ] 00:13:44.846 } 00:13:44.846 ] 00:13:44.846 } 00:13:44.846 [2024-12-11 13:54:37.704591] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:13:44.846 [2024-12-11 13:54:37.704724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72027 ] 00:13:44.846 [2024-12-11 13:54:37.888605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.104 [2024-12-11 13:54:38.008096] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.364 Running I/O for 5 seconds... 00:13:47.677 43611.00 IOPS, 170.36 MiB/s [2024-12-11T13:54:41.660Z] 43676.50 IOPS, 170.61 MiB/s [2024-12-11T13:54:42.596Z] 43691.67 IOPS, 170.67 MiB/s [2024-12-11T13:54:43.533Z] 43582.50 IOPS, 170.24 MiB/s 00:13:50.486 Latency(us) 00:13:50.486 [2024-12-11T13:54:43.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.486 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:50.486 xnvme_bdev : 5.00 43565.59 170.18 0.00 0.00 1465.62 184.24 3289.96 00:13:50.486 [2024-12-11T13:54:43.533Z] =================================================================================================================== 00:13:50.486 [2024-12-11T13:54:43.533Z] Total : 43565.59 170.18 0.00 0.00 1465.62 184.24 3289.96 00:13:51.865 13:54:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:51.865 13:54:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:51.865 13:54:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:51.865 13:54:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:51.865 13:54:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:51.865 { 00:13:51.865 "subsystems": [ 00:13:51.865 { 00:13:51.865 "subsystem": "bdev", 00:13:51.865 "config": [ 00:13:51.865 { 00:13:51.865 "params": { 00:13:51.865 "io_mechanism": "libaio", 00:13:51.865 "conserve_cpu": true, 00:13:51.865 "filename": "/dev/nvme0n1", 00:13:51.865 "name": "xnvme_bdev" 00:13:51.865 }, 00:13:51.865 "method": "bdev_xnvme_create" 00:13:51.865 }, 00:13:51.865 { 00:13:51.865 "method": "bdev_wait_for_examine" 00:13:51.865 } 00:13:51.865 ] 00:13:51.865 } 00:13:51.865 ] 00:13:51.865 } 00:13:51.865 [2024-12-11 13:54:44.626404] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
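As a sanity check on the latency table above, the MiB/s column is just IOPS times the 4 KiB I/O size: 43565.59 * 4096 / 1048576 comes to about 170.18 MiB/s, matching the reported total. The same arithmetic in shell, with awk doing the floating-point math:

    # Convert 4 KiB-block IOPS to MiB/s: iops * 4096 / 2^20.
    awk 'BEGIN { printf "%.2f MiB/s\n", 43565.59 * 4096 / 1048576 }'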
00:13:51.865 [2024-12-11 13:54:44.626709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72103 ]
00:13:51.865 [2024-12-11 13:54:44.806181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:52.124 [2024-12-11 13:54:44.915428] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:13:52.383 Running I/O for 5 seconds...
00:13:54.253 44902.00 IOPS, 175.40 MiB/s [2024-12-11T13:54:48.677Z] 44924.50 IOPS, 175.49 MiB/s [2024-12-11T13:54:49.613Z] 44859.33 IOPS, 175.23 MiB/s [2024-12-11T13:54:50.551Z] 44840.75 IOPS, 175.16 MiB/s [2024-12-11T13:54:50.551Z] 44609.00 IOPS, 174.25 MiB/s
00:13:57.504 Latency(us)
00:13:57.504 [2024-12-11T13:54:50.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:57.504 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:13:57.504 xnvme_bdev : 5.00 44594.85 174.20 0.00 0.00 1431.65 407.96 5000.74
00:13:57.504 [2024-12-11T13:54:50.551Z] ===================================================================================================================
00:13:57.504 [2024-12-11T13:54:50.551Z] Total : 44594.85 174.20 0.00 0.00 1431.65 407.96 5000.74
00:13:58.441
00:13:58.441 real 0m13.797s
00:13:58.441 user 0m4.962s
00:13:58.441 sys 0m5.773s
00:13:58.441 13:54:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:58.441 13:54:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:13:58.441 ************************************
00:13:58.441 END TEST xnvme_bdevperf
00:13:58.441 ************************************
00:13:58.441 13:54:51 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:13:58.441 13:54:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:58.441 13:54:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:58.441 13:54:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:58.441 ************************************
00:13:58.441 START TEST xnvme_fio_plugin
00:13:58.441 ************************************
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:13:58.441 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:13:58.701 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:13:58.701 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:13:58.701 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:13:58.701 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:13:58.701 13:54:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:58.701 {
00:13:58.701 "subsystems": [
00:13:58.701 {
00:13:58.701 "subsystem": "bdev",
00:13:58.701 "config": [
00:13:58.701 {
00:13:58.701 "params": {
00:13:58.701 "io_mechanism": "libaio",
00:13:58.701 "conserve_cpu": true,
00:13:58.701 "filename": "/dev/nvme0n1",
00:13:58.701 "name": "xnvme_bdev"
00:13:58.701 },
00:13:58.701 "method": "bdev_xnvme_create"
00:13:58.701 },
00:13:58.701 {
00:13:58.701 "method": "bdev_wait_for_examine"
00:13:58.701 }
00:13:58.701 ]
00:13:58.701 }
00:13:58.701 ]
00:13:58.701 }
00:13:58.701 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:13:58.701 fio-3.35
00:13:58.701 Starting 1 thread
00:14:05.290
00:14:05.290 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72228: Wed Dec 11 13:54:57 2024
00:14:05.290 read: IOPS=51.8k, BW=202MiB/s (212MB/s)(1011MiB/5001msec)
00:14:05.290 slat (usec): min=4, max=737, avg=16.85, stdev=21.40
00:14:05.290 clat (usec): min=59, max=7176, avg=746.16, stdev=467.29
00:14:05.290 lat (usec): min=81, max=7243, avg=763.02, stdev=470.68
00:14:05.290 clat percentiles (usec):
00:14:05.290 | 1.00th=[ 163], 5.00th=[ 241], 10.00th=[ 310], 20.00th=[ 412],
00:14:05.290 | 30.00th=[ 502], 40.00th=[ 586], 50.00th=[ 668], 60.00th=[ 758],
00:14:05.290 | 70.00th=[ 857], 80.00th=[ 979], 90.00th=[ 1188], 95.00th=[ 1467],
00:14:05.290 | 99.00th=[ 2737], 99.50th=[ 3326], 99.90th=[ 4146], 99.95th=[ 4424],
00:14:05.290 | 99.99th=[ 5080]
00:14:05.290 bw ( KiB/s): min=192712, max=225477, per=100.00%, avg=210506.33, stdev=11624.09, samples=9
00:14:05.290 iops : min=48178, max=56369, avg=52626.56, stdev=2905.98, samples=9
00:14:05.290 lat (usec) : 100=0.05%, 250=5.61%, 500=24.37%, 750=29.42%, 1000=22.04%
00:14:05.290 lat (msec) : 2=16.16%, 4=2.22%, 10=0.14%
00:14:05.290 cpu : usr=26.62%, sys=52.82%, ctx=168, majf=0, minf=764
00:14:05.290 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=10.0%, 16=25.4%, 32=58.2%, >=64=1.9%
00:14:05.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:05.290 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0%
00:14:05.290 issued rwts: total=258819,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:05.290 latency : target=0, window=0, percentile=100.00%, depth=64
00:14:05.290
00:14:05.290 Run status group 0 (all jobs):
00:14:05.290 READ: bw=202MiB/s (212MB/s), 202MiB/s-202MiB/s (212MB/s-212MB/s), io=1011MiB (1060MB), run=5001-5001msec
00:14:05.900 -----------------------------------------------------
00:14:05.900 Suppressions used:
00:14:05.900 count bytes template
00:14:05.900 1 11 /usr/src/fio/parse.c
00:14:05.900 1 8 libtcmalloc_minimal.so
00:14:05.900 1 904 libcrypto.so
00:14:05.900 -----------------------------------------------------
00:14:05.900
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:05.900 {
00:14:05.900 "subsystems": [
00:14:05.900 {
00:14:05.900 "subsystem": "bdev",
00:14:05.900 "config": [
00:14:05.900 {
00:14:05.900 "params": {
00:14:05.900 "io_mechanism": "libaio",
00:14:05.900 "conserve_cpu": true,
00:14:05.900 "filename": "/dev/nvme0n1",
00:14:05.900 "name": "xnvme_bdev"
00:14:05.900 },
00:14:05.900 "method": "bdev_xnvme_create"
00:14:05.900 },
00:14:05.900 {
00:14:05.900 "method": "bdev_wait_for_examine"
00:14:05.900 }
00:14:05.900 ]
00:14:05.900 }
00:14:05.900 ]
00:14:05.900 }
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:14:05.900 13:54:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:14:06.174 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:14:06.174 fio-3.35
00:14:06.174 Starting 1 thread
00:14:12.741
00:14:12.741 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72321: Wed Dec 11 13:55:04 2024
00:14:12.741 write: IOPS=50.4k, BW=197MiB/s (207MB/s)(985MiB/5001msec); 0 zone resets
00:14:12.741 slat (usec): min=4, max=873, avg=17.07, stdev=23.82
00:14:12.741 clat (usec): min=85, max=5695, avg=775.44, stdev=484.23
00:14:12.741 lat (usec): min=132, max=5789, avg=792.51, stdev=487.71
00:14:12.741 clat percentiles (usec):
00:14:12.741 | 1.00th=[ 174], 5.00th=[ 251], 10.00th=[ 322], 20.00th=[ 433],
00:14:12.741 | 30.00th=[ 523], 40.00th=[ 611], 50.00th=[ 701], 60.00th=[ 791],
00:14:12.741 | 70.00th=[ 889], 80.00th=[ 1012], 90.00th=[ 1205], 95.00th=[ 1483],
00:14:12.741 | 99.00th=[ 2868], 99.50th=[ 3490], 99.90th=[ 4359], 99.95th=[ 4621],
00:14:12.741 | 99.99th=[ 5014]
00:14:12.741 bw ( KiB/s): min=185588, max=218248, per=100.00%, avg=203473.33, stdev=10717.59, samples=9
00:14:12.741 iops : min=46397, max=54570, avg=50868.44, stdev=2680.79, samples=9
00:14:12.741 lat (usec) : 100=0.03%, 250=4.91%, 500=22.38%, 750=28.27%, 1000=23.81%
00:14:12.741 lat (msec) : 2=17.94%, 4=2.41%, 10=0.24%
00:14:12.741 cpu : usr=28.32%, sys=51.64%, ctx=80, majf=0, minf=765
00:14:12.741 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=10.0%, 16=25.3%, 32=58.2%, >=64=1.9%
00:14:12.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:12.741 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0%
00:14:12.741 issued rwts: total=0,252206,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:12.741 latency : target=0, window=0, percentile=100.00%, depth=64
00:14:12.741
00:14:12.741 Run status group 0 (all jobs):
00:14:12.742 WRITE: bw=197MiB/s (207MB/s), 197MiB/s-197MiB/s (207MB/s-207MB/s), io=985MiB (1033MB), run=5001-5001msec
00:14:13.308 -----------------------------------------------------
00:14:13.308 Suppressions used:
00:14:13.308 count bytes template
00:14:13.308 1 11 /usr/src/fio/parse.c
00:14:13.308 1 8 libtcmalloc_minimal.so
00:14:13.308 1 904 libcrypto.so
00:14:13.308 -----------------------------------------------------
00:14:13.308
00:14:13.308
00:14:13.308 real 0m14.763s
00:14:13.308 user 0m6.456s
00:14:13.308 sys 0m5.952s
00:14:13.308 13:55:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:13.308 ************************************
00:14:13.308 END TEST xnvme_fio_plugin
00:14:13.308 ************************************
00:14:13.308 13:55:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:14:13.308 13:55:06 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}"
00:14:13.308 13:55:06 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring
00:14:13.308 13:55:06 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
00:14:13.308 13:55:06 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1
00:14:13.308 13:55:06 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev
00:14:13.308 13:55:06 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:14:13.308 13:55:06 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false
00:14:13.308 13:55:06 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false
00:14:13.308 13:55:06 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:14:13.308 13:55:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:13.308 13:55:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:13.308 13:55:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:14:13.308 ************************************
00:14:13.308 START TEST xnvme_rpc
00:14:13.308 ************************************
00:14:13.308 13:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:14:13.308 13:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:14:13.308 13:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:14:13.308 13:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:14:13.308 13:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:14:13.308 13:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72413
00:14:13.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:13.308 13:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:13.308 13:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72413
00:14:13.308 13:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72413 ']'
00:14:13.308 13:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:13.308 13:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:13.308 13:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:13.308 13:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:13.308 13:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:13.567 [2024-12-11 13:55:06.409256] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization...
00:14:13.567 [2024-12-11 13:55:06.409579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72413 ]
00:14:13.567 [2024-12-11 13:55:06.586215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:13.826 [2024-12-11 13:55:06.704673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring ''
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:14.761 xnvme_bdev
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]]
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]]
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72413
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72413 ']'
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72413
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:14.761 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72413
00:14:15.019 killing process with pid 72413
00:14:15.019 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:15.019 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:15.019 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72413'
00:14:15.019 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72413
00:14:15.019 13:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72413
00:14:17.551
00:14:17.551 real 0m4.052s
00:14:17.551 user 0m4.106s
00:14:17.551 sys 0m0.562s
00:14:17.551 13:55:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:17.551 ************************************
00:14:17.551 13:55:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:17.551 END TEST xnvme_rpc
00:14:17.551 ************************************
00:14:17.552 13:55:10 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:14:17.552 13:55:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:17.552 13:55:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:17.552 13:55:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:14:17.552 ************************************
00:14:17.552 START TEST xnvme_bdevperf
00:14:17.552 ************************************
00:14:17.552 13:55:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:14:17.552 13:55:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:14:17.552 13:55:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring
00:14:17.552 13:55:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:14:17.552 13:55:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:14:17.552 13:55:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:14:17.552 13:55:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:14:17.552 13:55:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:14:17.552 {
00:14:17.552 "subsystems": [
00:14:17.552 {
00:14:17.552 "subsystem": "bdev",
00:14:17.552 "config": [
00:14:17.552 {
00:14:17.552 "params": {
00:14:17.552 "io_mechanism": "io_uring",
00:14:17.552 "conserve_cpu": false,
00:14:17.552 "filename": "/dev/nvme0n1",
00:14:17.552 "name": "xnvme_bdev"
00:14:17.552 },
00:14:17.552 "method": "bdev_xnvme_create"
00:14:17.552 },
00:14:17.552 {
00:14:17.552 "method": "bdev_wait_for_examine"
00:14:17.552 }
00:14:17.552 ]
00:14:17.552 }
00:14:17.552 ]
00:14:17.552 }
00:14:17.552 [2024-12-11 13:55:10.520515] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization...
00:14:17.552 [2024-12-11 13:55:10.520652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72498 ]
00:14:17.810 [2024-12-11 13:55:10.700866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:17.810 [2024-12-11 13:55:10.822198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:18.376 Running I/O for 5 seconds...
00:14:20.242 39511.00 IOPS, 154.34 MiB/s [2024-12-11T13:55:14.225Z] 40180.50 IOPS, 156.96 MiB/s [2024-12-11T13:55:15.605Z] 40796.33 IOPS, 159.36 MiB/s [2024-12-11T13:55:16.538Z] 42862.75 IOPS, 167.43 MiB/s [2024-12-11T13:55:16.538Z] 42345.80 IOPS, 165.41 MiB/s
00:14:23.491 Latency(us)
00:14:23.491 [2024-12-11T13:55:16.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:23.491 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:14:23.491 xnvme_bdev : 5.01 42289.09 165.19 0.00 0.00 1509.08 343.80 6448.32
00:14:23.491 [2024-12-11T13:55:16.538Z] ===================================================================================================================
00:14:23.491 [2024-12-11T13:55:16.538Z] Total : 42289.09 165.19 0.00 0.00 1509.08 343.80 6448.32
00:14:24.425 13:55:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:14:24.425 13:55:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:14:24.425 13:55:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:14:24.425 13:55:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:14:24.425 13:55:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:14:24.425 {
00:14:24.425 "subsystems": [
00:14:24.425 {
00:14:24.425 "subsystem": "bdev",
00:14:24.425 "config": [
00:14:24.425 {
00:14:24.425 "params": {
00:14:24.425 "io_mechanism": "io_uring",
00:14:24.425 "conserve_cpu": false,
00:14:24.425 "filename": "/dev/nvme0n1",
00:14:24.425 "name": "xnvme_bdev"
00:14:24.425 },
00:14:24.425 "method": "bdev_xnvme_create"
00:14:24.425 },
00:14:24.425 {
00:14:24.425 "method": "bdev_wait_for_examine"
00:14:24.425 }
00:14:24.425 ]
00:14:24.425 }
00:14:24.425 ]
00:14:24.425 }
00:14:24.425 [2024-12-11 13:55:17.433460] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization...
00:14:24.425 [2024-12-11 13:55:17.433576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72573 ]
00:14:24.683 [2024-12-11 13:55:17.615596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:24.942 [2024-12-11 13:55:17.729740] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:25.201 Running I/O for 5 seconds...
00:14:27.074 27200.00 IOPS, 106.25 MiB/s [2024-12-11T13:55:21.502Z] 27584.00 IOPS, 107.75 MiB/s [2024-12-11T13:55:22.439Z] 27541.33 IOPS, 107.58 MiB/s [2024-12-11T13:55:23.376Z] 27424.00 IOPS, 107.12 MiB/s [2024-12-11T13:55:23.376Z] 27328.00 IOPS, 106.75 MiB/s
00:14:30.329 Latency(us)
00:14:30.329 [2024-12-11T13:55:23.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:30.329 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:14:30.329 xnvme_bdev : 5.01 27300.28 106.64 0.00 0.00 2337.42 1348.88 6422.00
00:14:30.329 [2024-12-11T13:55:23.376Z] ===================================================================================================================
00:14:30.329 [2024-12-11T13:55:23.376Z] Total : 27300.28 106.64 0.00 0.00 2337.42 1348.88 6422.00
00:14:31.266 ************************************
00:14:31.266 END TEST xnvme_bdevperf
00:14:31.266 ************************************
00:14:31.266
00:14:31.266 real 0m13.799s
00:14:31.266 user 0m6.447s
00:14:31.266 sys 0m7.134s
00:14:31.266 13:55:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:31.266 13:55:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:14:31.266 13:55:24 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:14:31.266 13:55:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:31.266 13:55:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:31.266 13:55:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:14:31.266 ************************************
00:14:31.266 START TEST xnvme_fio_plugin
00:14:31.266 ************************************
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:14:31.266 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:14:31.525 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:14:31.525 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:14:31.525 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:14:31.525 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:14:31.525 13:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:14:31.525 {
00:14:31.525 "subsystems": [
00:14:31.525 {
00:14:31.525 "subsystem": "bdev",
00:14:31.525 "config": [
00:14:31.525 {
00:14:31.525 "params": {
00:14:31.525 "io_mechanism": "io_uring",
00:14:31.525 "conserve_cpu": false,
00:14:31.525 "filename": "/dev/nvme0n1",
00:14:31.525 "name": "xnvme_bdev"
00:14:31.525 },
00:14:31.525 "method": "bdev_xnvme_create"
00:14:31.525 },
00:14:31.525 {
00:14:31.525 "method": "bdev_wait_for_examine"
00:14:31.525 }
00:14:31.525 ]
00:14:31.525 }
00:14:31.525 ]
00:14:31.525 }
00:14:31.525 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:14:31.525 fio-3.35
00:14:31.525 Starting 1 thread
00:14:38.087
00:14:38.087 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72698: Wed Dec 11 13:55:30 2024
00:14:38.087 read: IOPS=27.4k, BW=107MiB/s (112MB/s)(535MiB/5001msec)
00:14:38.087 slat (nsec): min=3557, max=95000, avg=6224.14, stdev=2011.62
00:14:38.087 clat (usec): min=1471, max=8248, avg=2091.85, stdev=254.28
00:14:38.087 lat (usec): min=1479, max=8256, avg=2098.08, stdev=254.69
00:14:38.087 clat percentiles (usec):
00:14:38.087 | 1.00th=[ 1696], 5.00th=[ 1795], 10.00th=[ 1844], 20.00th=[ 1909],
00:14:38.087 | 30.00th=[ 1975], 40.00th=[ 2024], 50.00th=[ 2073], 60.00th=[ 2114],
00:14:38.087 | 70.00th=[ 2180], 80.00th=[ 2245], 90.00th=[ 2343], 95.00th=[ 2474],
00:14:38.087 | 99.00th=[ 2868], 99.50th=[ 3032], 99.90th=[ 3294], 99.95th=[ 3392],
00:14:38.087 | 99.99th=[ 8160]
00:14:38.087 bw ( KiB/s): min=102400, max=116736, per=100.00%, avg=109909.33, stdev=4536.33, samples=9
00:14:38.087 iops : min=25600, max=29184, avg=27477.33, stdev=1134.08, samples=9
00:14:38.087 lat (msec) : 2=36.63%, 4=63.32%, 10=0.05%
00:14:38.087 cpu : usr=30.90%, sys=68.02%, ctx=31, majf=0, minf=762
00:14:38.087 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:14:38.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:38.087 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:14:38.087 issued rwts: total=137024,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:38.087 latency : target=0, window=0, percentile=100.00%, depth=64
00:14:38.087
00:14:38.087 Run status group 0 (all jobs):
00:14:38.087 READ: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=535MiB (561MB), run=5001-5001msec
00:14:38.674 -----------------------------------------------------
00:14:38.674 Suppressions used:
00:14:38.674 count bytes template
00:14:38.674 1 11 /usr/src/fio/parse.c
00:14:38.674 1 8 libtcmalloc_minimal.so
00:14:38.674 1 904 libcrypto.so
00:14:38.674 -----------------------------------------------------
00:14:38.674
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:14:38.674 {
00:14:38.674 "subsystems": [
00:14:38.674 {
00:14:38.674 "subsystem": "bdev",
00:14:38.674 "config": [
00:14:38.674 {
00:14:38.674 "params": {
00:14:38.674 "io_mechanism": "io_uring",
00:14:38.674 "conserve_cpu": false,
00:14:38.674 "filename": "/dev/nvme0n1",
00:14:38.674 "name": "xnvme_bdev"
00:14:38.674 },
00:14:38.674 "method": "bdev_xnvme_create"
00:14:38.674 },
00:14:38.674 {
00:14:38.674 "method": "bdev_wait_for_examine"
00:14:38.674 }
00:14:38.674 ]
00:14:38.674 }
00:14:38.674 ]
00:14:38.674 }
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:14:38.674 13:55:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:14:06.174 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:14:06.174 fio-3.35
00:14:06.174 Starting 1 thread
00:14:45.508
00:14:45.508 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72795: Wed Dec 11 13:55:37 2024
00:14:45.508 write: IOPS=29.2k, BW=114MiB/s (120MB/s)(571MiB/5001msec); 0 zone resets
00:14:45.508 slat (usec): min=3, max=127, avg= 5.74, stdev= 1.95
00:14:45.508 clat (usec): min=1280, max=8070, avg=1961.44, stdev=295.16
00:14:45.508 lat (usec): min=1284, max=8078, avg=1967.18, stdev=295.94
00:14:45.508 clat percentiles (usec):
00:14:45.508 | 1.00th=[ 1418], 5.00th=[ 1516], 10.00th=[ 1582], 20.00th=[ 1696],
00:14:45.508 | 30.00th=[ 1827], 40.00th=[ 1909], 50.00th=[ 1975], 60.00th=[ 2040],
00:14:45.508 | 70.00th=[ 2114], 80.00th=[ 2180], 90.00th=[ 2278], 95.00th=[ 2376],
00:14:45.508 | 99.00th=[ 2606], 99.50th=[ 2704], 99.90th=[ 3032], 99.95th=[ 3228],
00:14:45.508 | 99.99th=[ 7963]
00:14:45.508 bw ( KiB/s): min=103936, max=137216, per=100.00%, avg=117705.22, stdev=12809.09, samples=9
00:14:45.508 iops : min=25984, max=34304, avg=29426.22, stdev=3202.22, samples=9
00:14:45.508 lat (msec) : 2=53.62%, 4=46.34%, 10=0.04%
00:14:45.508 cpu : usr=31.84%, sys=67.16%, ctx=17, majf=0, minf=763
00:14:45.508 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:14:45.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:45.508 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:14:45.508 issued rwts: total=0,146176,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:45.508 latency : target=0, window=0, percentile=100.00%, depth=64
00:14:45.508
00:14:45.508 Run status group 0 (all jobs):
00:14:45.508 WRITE: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=571MiB (599MB), run=5001-5001msec
00:14:46.099 -----------------------------------------------------
00:14:46.099 Suppressions used:
00:14:46.099 count bytes template
00:14:46.099 1 11 /usr/src/fio/parse.c
00:14:46.099 1 8 libtcmalloc_minimal.so
00:14:46.099 1 904 libcrypto.so
00:14:46.099 -----------------------------------------------------
00:14:46.099
00:14:46.099 ************************************
00:14:46.099
00:14:46.099 real 0m14.731s
00:14:46.099 user 0m6.888s
00:14:46.099 sys 0m7.467s
00:14:46.099 13:55:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:46.099 13:55:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:14:46.099 END TEST xnvme_fio_plugin
00:14:46.099 ************************************
00:14:46.099 13:55:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:14:46.099 13:55:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true
00:14:46.099 13:55:39 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true
00:14:46.099 13:55:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:14:46.099 13:55:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:46.099 13:55:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:46.099 13:55:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:14:46.099 ************************************
00:14:46.099 START TEST xnvme_rpc
00:14:46.099 ************************************
00:14:46.099 13:55:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:14:46.099 13:55:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:14:46.099 13:55:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:14:46.099 13:55:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:14:46.099 13:55:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:14:46.099 13:55:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72877
00:14:46.099 13:55:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:46.099 13:55:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72877
00:14:46.099 13:55:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72877 ']'
00:14:46.099 13:55:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:46.099 13:55:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:46.099 13:55:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:46.099 13:55:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:46.099 13:55:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:46.358 [2024-12-11 13:55:39.205367] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization...
00:14:46.358 [2024-12-11 13:55:39.205503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72877 ]
00:14:46.358 [2024-12-11 13:55:39.389226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:46.616 [2024-12-11 13:55:39.495715] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:47.551 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:47.551 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:14:47.551 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
00:14:47.551 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.551 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:47.551 xnvme_bdev
00:14:47.551 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.551 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:14:47.551 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:14:47.551 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]]
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]]
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72877
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72877 ']'
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72877
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:47.552 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72877
00:14:47.810 killing process with pid 72877
00:14:47.810 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:47.810 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:47.810 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72877'
00:14:47.810 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72877
00:14:47.810 13:55:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72877
00:14:50.344 ************************************
00:14:50.344 END TEST xnvme_rpc
00:14:50.344 ************************************
00:14:50.344
00:14:50.344 real 0m3.920s
00:14:50.344 user 0m3.980s
00:14:50.344 sys 0m0.532s
00:14:50.344 13:55:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:50.344 13:55:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:50.344 13:55:43 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:14:50.344 13:55:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:50.344 13:55:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:50.344 13:55:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:14:50.344 ************************************
00:14:50.344 START TEST xnvme_bdevperf
00:14:50.344 ************************************
00:14:50.344 13:55:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:14:50.344 13:55:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:14:50.344 13:55:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring
00:14:50.344 13:55:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:14:50.344 13:55:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:14:50.344 13:55:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:14:50.344 13:55:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:14:50.344 13:55:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:14:50.344 {
00:14:50.344 "subsystems": [
00:14:50.344 {
00:14:50.344 "subsystem": "bdev",
00:14:50.344 "config": [
00:14:50.344 {
00:14:50.344 "params": {
00:14:50.344 "io_mechanism": "io_uring",
00:14:50.344 "conserve_cpu": true,
00:14:50.344 "filename": "/dev/nvme0n1",
00:14:50.344 "name": "xnvme_bdev"
00:14:50.344 },
00:14:50.344 "method": "bdev_xnvme_create"
00:14:50.344 },
00:14:50.344 {
00:14:50.344 "method": "bdev_wait_for_examine"
00:14:50.344 }
00:14:50.344 ]
00:14:50.344 }
00:14:50.344 ]
00:14:50.344 }
00:14:50.344 [2024-12-11 13:55:43.196567] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization...
00:14:50.344 [2024-12-11 13:55:43.196684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72962 ]
00:14:50.344 [2024-12-11 13:55:43.376446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:50.602 [2024-12-11 13:55:43.483297] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:50.860 Running I/O for 5 seconds...
00:14:53.173 39736.00 IOPS, 155.22 MiB/s [2024-12-11T13:55:47.158Z] 42484.50 IOPS, 165.96 MiB/s [2024-12-11T13:55:48.095Z] 40104.00 IOPS, 156.66 MiB/s [2024-12-11T13:55:49.030Z] 39377.75 IOPS, 153.82 MiB/s
00:14:55.983 Latency(us)
00:14:55.983 [2024-12-11T13:55:49.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:55.983 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:14:55.983 xnvme_bdev : 5.00 38047.23 148.62 0.00 0.00 1677.75 107.75 8474.94
00:14:55.983 [2024-12-11T13:55:49.030Z] ===================================================================================================================
00:14:55.983 [2024-12-11T13:55:49.031Z] Total : 38047.23 148.62 0.00 0.00 1677.75 107.75 8474.94
00:14:57.364 13:55:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:14:57.364 13:55:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:14:57.364 13:55:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:14:57.364 13:55:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:14:57.364 13:55:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:14:57.364 {
00:14:57.364 "subsystems": [
00:14:57.364 {
00:14:57.364 "subsystem": "bdev",
00:14:57.364 "config": [
00:14:57.364 {
00:14:57.364 "params": {
00:14:57.364 "io_mechanism": "io_uring",
00:14:57.364 "conserve_cpu": true,
00:14:57.364 "filename": "/dev/nvme0n1",
00:14:57.364 "name": "xnvme_bdev"
00:14:57.364 },
00:14:57.364 "method": "bdev_xnvme_create"
00:14:57.364 },
00:14:57.364 {
00:14:57.364 "method": "bdev_wait_for_examine"
00:14:57.364 }
00:14:57.364 ]
00:14:57.364 }
00:14:57.364 ]
00:14:57.364 }
00:14:57.364 [2024-12-11 13:55:50.063175] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization...
00:14:57.364 [2024-12-11 13:55:50.063476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73043 ]
00:14:57.364 [2024-12-11 13:55:50.243788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:57.364 [2024-12-11 13:55:50.356006] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:57.930 Running I/O for 5 seconds...
00:14:59.801 34128.00 IOPS, 133.31 MiB/s [2024-12-11T13:55:53.785Z] 31816.00 IOPS, 124.28 MiB/s [2024-12-11T13:55:54.722Z] 29658.67 IOPS, 115.85 MiB/s [2024-12-11T13:55:56.096Z] 28932.00 IOPS, 113.02 MiB/s [2024-12-11T13:55:56.096Z] 28137.60 IOPS, 109.91 MiB/s
00:15:03.049 Latency(us)
00:15:03.049 [2024-12-11T13:55:56.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:03.049 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:15:03.049 xnvme_bdev : 5.01 28103.46 109.78 0.00 0.00 2270.47 894.87 7474.79
00:15:03.049 [2024-12-11T13:55:56.096Z] ===================================================================================================================
00:15:03.049 [2024-12-11T13:55:56.096Z] Total : 28103.46 109.78 0.00 0.00 2270.47 894.87 7474.79
00:15:03.986
00:15:03.986 real 0m13.708s
00:15:03.986 user 0m7.934s
00:15:03.986 sys 0m5.291s
00:15:03.986 13:55:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:03.986 ************************************
00:15:03.986 END TEST xnvme_bdevperf
00:15:03.986 ************************************
00:15:03.986 13:55:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:03.986 13:55:56 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:15:03.986 13:55:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:03.986 13:55:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:03.986 13:55:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:03.986 ************************************
00:15:03.986 START TEST xnvme_fio_plugin
00:15:03.986 ************************************
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:15:03.986 {
00:15:03.986 "subsystems": [
00:15:03.986 {
00:15:03.986 "subsystem": "bdev",
00:15:03.986 "config": [
00:15:03.986 {
00:15:03.986 "params": {
00:15:03.986 "io_mechanism": "io_uring",
00:15:03.986 "conserve_cpu": true,
00:15:03.986 "filename": "/dev/nvme0n1",
00:15:03.986 "name": "xnvme_bdev"
00:15:03.986 },
00:15:03.986 "method": "bdev_xnvme_create"
00:15:03.986 },
00:15:03.986 {
00:15:03.986 "method": "bdev_wait_for_examine"
00:15:03.986 }
00:15:03.986 ]
00:15:03.986 }
00:15:03.986 ]
00:15:03.986 }
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:03.986 13:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:04.245 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:15:04.245 fio-3.35
00:15:04.245 Starting 1 thread
00:15:10.815
00:15:10.815 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73163: Wed Dec 11 13:56:02 2024
00:15:10.815 read: IOPS=24.4k, BW=95.3MiB/s (100.0MB/s)(477MiB/5001msec)
00:15:10.815 slat (nsec): min=3909, max=85998, avg=7819.50, stdev=3204.66
00:15:10.815 clat (usec): min=1274, max=4500, avg=2311.14, stdev=320.47
00:15:10.815 lat (usec): min=1280, max=4532, avg=2318.96, stdev=321.64
00:15:10.815 clat percentiles (usec):
00:15:10.815 | 1.00th=[ 1483], 5.00th=[ 1696], 10.00th=[ 1844], 20.00th=[ 2057],
00:15:10.815 | 30.00th=[ 2180], 40.00th=[ 2278], 50.00th=[ 2343], 60.00th=[ 2409],
00:15:10.815 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 2704], 95.00th=[ 2769],
00:15:10.815 | 99.00th=[ 2868], 99.50th=[ 2933], 99.90th=[ 3523], 99.95th=[ 3851],
00:15:10.815 | 99.99th=[ 4359]
00:15:10.815 bw ( KiB/s): min=91648, max=105472, per=98.64%, avg=96291.78, stdev=4156.83, samples=9
00:15:10.815 iops : min=22912, max=26368, avg=24072.89, stdev=1039.23, samples=9
00:15:10.815 lat (msec) : 2=16.56%, 4=83.41%, 10=0.03%
00:15:10.815 cpu : usr=42.46%, sys=52.80%, ctx=11, majf=0, minf=762
00:15:10.815 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:15:10.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:10.815 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:15:10.815 issued rwts: total=122048,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:10.815 latency : target=0, window=0, percentile=100.00%, depth=64
00:15:10.815
00:15:10.815 Run status group 0 (all jobs):
00:15:10.815 READ: bw=95.3MiB/s (100.0MB/s), 95.3MiB/s-95.3MiB/s (100.0MB/s-100.0MB/s), io=477MiB (500MB), run=5001-5001msec
00:15:11.383 -----------------------------------------------------
00:15:11.383 Suppressions used:
00:15:11.383 count bytes template
00:15:11.383 1 11 /usr/src/fio/parse.c
00:15:11.383 1 8 libtcmalloc_minimal.so
00:15:11.383 1 904 libcrypto.so
00:15:11.383 -----------------------------------------------------
00:15:11.383
00:15:11.383 13:56:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:11.383 13:56:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:11.384 13:56:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:11.384 {
00:15:11.384 "subsystems": [
00:15:11.384 {
00:15:11.384 "subsystem": "bdev",
00:15:11.384 "config": [
00:15:11.384 {
00:15:11.384 "params": {
00:15:11.384 "io_mechanism": "io_uring",
00:15:11.384 "conserve_cpu": true,
00:15:11.384 "filename": "/dev/nvme0n1",
00:15:11.384 "name": "xnvme_bdev"
00:15:11.384 },
00:15:11.384 "method": "bdev_xnvme_create"
00:15:11.384 },
00:15:11.384 {
00:15:11.384 "method": "bdev_wait_for_examine"
00:15:11.384 }
00:15:11.384 ]
00:15:11.384 }
00:15:11.384 ]
00:15:11.384 }
00:15:11.643 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:15:11.643 fio-3.35
00:15:11.643 Starting 1 thread
00:15:18.211
00:15:18.211 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73257: Wed Dec 11 13:56:10 2024
00:15:18.211 write: IOPS=33.5k, BW=131MiB/s (137MB/s)(654MiB/5002msec); 0 zone resets
00:15:18.211 slat (nsec): min=3601, max=71018, avg=5101.87, stdev=1533.08
00:15:18.211 clat (usec): min=1100, max=5354, avg=1713.97, stdev=209.30
00:15:18.211 lat (usec): min=1105, max=5360, avg=1719.07, stdev=209.79
00:15:18.211 clat percentiles (usec):
00:15:18.211 | 1.00th=[ 1352], 5.00th=[ 1434], 10.00th=[ 1483], 20.00th=[ 1549],
00:15:18.211 | 30.00th=[ 1598], 40.00th=[ 1647], 50.00th=[ 1696], 60.00th=[ 1745],
00:15:18.211 | 70.00th=[ 1795], 80.00th=[ 1860], 90.00th=[ 1975], 95.00th=[ 2073],
00:15:18.211 | 99.00th=[ 2311], 99.50th=[ 2376], 99.90th=[ 2606], 99.95th=[ 3064],
00:15:18.211 | 99.99th=[ 5276]
00:15:18.211 bw ( KiB/s): min=121344, max=144896, per=99.18%, avg=132721.78, stdev=7872.90, samples=9
00:15:18.211 iops : min=30336, max=36224, avg=33180.44, stdev=1968.22, samples=9
00:15:18.211 lat (msec) : 2=92.01%, 4=7.95%, 10=0.04%
00:15:18.211 cpu : usr=48.61%, sys=48.25%, ctx=10, majf=0, minf=763
00:15:18.211 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:15:18.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:18.211 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:15:18.211 issued rwts: total=0,167343,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:18.211 latency : target=0, window=0, percentile=100.00%, depth=64
00:15:18.211
00:15:18.211 Run status group 0 (all jobs):
00:15:18.211 WRITE: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=654MiB (685MB), run=5002-5002msec
00:15:18.469 -----------------------------------------------------
00:15:18.469 Suppressions used:
00:15:18.469 count bytes template
00:15:18.469 1 11 /usr/src/fio/parse.c
00:15:18.469 1 8 libtcmalloc_minimal.so
00:15:18.469 1 904 libcrypto.so
00:15:18.469 -----------------------------------------------------
00:15:18.469
00:15:18.469 ************************************
00:15:18.469 END TEST xnvme_fio_plugin
00:15:18.469 ************************************
00:15:18.469
00:15:18.469 real 0m14.607s
00:15:18.469 user 0m8.236s
00:15:18.469 sys 0m5.714s 00:15:18.469 13:56:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.469 13:56:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:18.728 13:56:11 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:18.729 13:56:11 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:15:18.729 13:56:11 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:15:18.729 13:56:11 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:15:18.729 13:56:11 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:18.729 13:56:11 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:18.729 13:56:11 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:18.729 13:56:11 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:18.729 13:56:11 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:18.729 13:56:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:18.729 13:56:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.729 13:56:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:18.729 ************************************ 00:15:18.729 START TEST xnvme_rpc 00:15:18.729 ************************************ 00:15:18.729 13:56:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:18.729 13:56:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:18.729 13:56:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:18.729 13:56:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:18.729 13:56:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:18.729 13:56:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73347 00:15:18.729 13:56:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73347 00:15:18.729 13:56:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.729 13:56:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73347 ']' 00:15:18.729 13:56:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.729 13:56:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.729 13:56:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.729 13:56:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.729 13:56:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.729 [2024-12-11 13:56:11.669044] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:15:18.729 [2024-12-11 13:56:11.669171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73347 ] 00:15:19.010 [2024-12-11 13:56:11.847192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.010 [2024-12-11 13:56:11.959159] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.955 xnvme_bdev 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73347 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73347 ']' 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73347 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.955 13:56:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73347 00:15:20.213 13:56:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:20.213 13:56:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:20.213 killing process with pid 73347 00:15:20.213 13:56:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73347' 00:15:20.213 13:56:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73347 00:15:20.213 13:56:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73347 00:15:22.745 00:15:22.745 real 0m3.850s 00:15:22.745 user 0m3.906s 00:15:22.745 sys 0m0.544s 00:15:22.745 13:56:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.745 ************************************ 00:15:22.745 END TEST xnvme_rpc 00:15:22.745 13:56:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.745 ************************************ 00:15:22.745 13:56:15 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:22.745 13:56:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:22.745 13:56:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:22.745 13:56:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:22.745 ************************************ 00:15:22.745 START TEST xnvme_bdevperf 00:15:22.745 ************************************ 00:15:22.745 13:56:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:22.745 13:56:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:22.745 13:56:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:22.745 13:56:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:22.745 13:56:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:22.745 13:56:15 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:22.745 13:56:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:22.745 13:56:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:22.745 { 00:15:22.745 "subsystems": [ 00:15:22.745 { 00:15:22.745 "subsystem": "bdev", 00:15:22.745 "config": [ 00:15:22.746 { 00:15:22.746 "params": { 00:15:22.746 "io_mechanism": "io_uring_cmd", 00:15:22.746 "conserve_cpu": false, 00:15:22.746 "filename": "/dev/ng0n1", 00:15:22.746 "name": "xnvme_bdev" 00:15:22.746 }, 00:15:22.746 "method": "bdev_xnvme_create" 00:15:22.746 }, 00:15:22.746 { 00:15:22.746 "method": "bdev_wait_for_examine" 00:15:22.746 } 00:15:22.746 ] 00:15:22.746 } 00:15:22.746 ] 00:15:22.746 } 00:15:22.746 [2024-12-11 13:56:15.575183] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:15:22.746 [2024-12-11 13:56:15.575310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73433 ] 00:15:22.746 [2024-12-11 13:56:15.754805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.004 [2024-12-11 13:56:15.864081] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.263 Running I/O for 5 seconds... 00:15:25.575 37312.00 IOPS, 145.75 MiB/s [2024-12-11T13:56:19.558Z] 35552.00 IOPS, 138.88 MiB/s [2024-12-11T13:56:20.531Z] 33728.00 IOPS, 131.75 MiB/s [2024-12-11T13:56:21.468Z] 33104.00 IOPS, 129.31 MiB/s [2024-12-11T13:56:21.468Z] 32652.80 IOPS, 127.55 MiB/s 00:15:28.421 Latency(us) 00:15:28.421 [2024-12-11T13:56:21.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.421 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:28.421 xnvme_bdev : 5.01 32632.20 127.47 0.00 0.00 1955.44 1039.63 6579.92 00:15:28.421 [2024-12-11T13:56:21.468Z] =================================================================================================================== 00:15:28.421 [2024-12-11T13:56:21.468Z] Total : 32632.20 127.47 0.00 0.00 1955.44 1039.63 6579.92 00:15:29.359 13:56:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:29.359 13:56:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:29.359 13:56:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:29.359 13:56:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:29.359 13:56:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:29.359 { 00:15:29.359 "subsystems": [ 00:15:29.359 { 00:15:29.359 "subsystem": "bdev", 00:15:29.359 "config": [ 00:15:29.359 { 00:15:29.359 "params": { 00:15:29.359 "io_mechanism": "io_uring_cmd", 00:15:29.359 "conserve_cpu": false, 00:15:29.359 "filename": "/dev/ng0n1", 00:15:29.359 "name": "xnvme_bdev" 00:15:29.359 }, 00:15:29.359 "method": "bdev_xnvme_create" 00:15:29.359 }, 00:15:29.359 { 00:15:29.359 "method": "bdev_wait_for_examine" 00:15:29.359 } 00:15:29.359 ] 00:15:29.359 } 00:15:29.359 ] 00:15:29.359 } 00:15:29.619 [2024-12-11 13:56:22.439698] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:15:29.619 [2024-12-11 13:56:22.439863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73507 ] 00:15:29.619 [2024-12-11 13:56:22.618892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.879 [2024-12-11 13:56:22.729856] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.138 Running I/O for 5 seconds... 00:15:32.014 31680.00 IOPS, 123.75 MiB/s [2024-12-11T13:56:26.448Z] 32640.00 IOPS, 127.50 MiB/s [2024-12-11T13:56:27.400Z] 32576.00 IOPS, 127.25 MiB/s [2024-12-11T13:56:28.337Z] 31488.00 IOPS, 123.00 MiB/s [2024-12-11T13:56:28.337Z] 31808.00 IOPS, 124.25 MiB/s 00:15:35.291 Latency(us) 00:15:35.291 [2024-12-11T13:56:28.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.291 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:35.291 xnvme_bdev : 5.00 31789.86 124.18 0.00 0.00 2007.08 1125.17 5553.45 00:15:35.291 [2024-12-11T13:56:28.338Z] =================================================================================================================== 00:15:35.291 [2024-12-11T13:56:28.338Z] Total : 31789.86 124.18 0.00 0.00 2007.08 1125.17 5553.45 00:15:36.227 13:56:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:36.227 13:56:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:36.227 13:56:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:36.227 13:56:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:36.227 13:56:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:36.227 { 00:15:36.227 "subsystems": [ 00:15:36.227 { 00:15:36.228 "subsystem": "bdev", 00:15:36.228 "config": [ 00:15:36.228 { 00:15:36.228 "params": { 00:15:36.228 "io_mechanism": "io_uring_cmd", 00:15:36.228 "conserve_cpu": false, 00:15:36.228 "filename": "/dev/ng0n1", 00:15:36.228 "name": "xnvme_bdev" 00:15:36.228 }, 00:15:36.228 "method": "bdev_xnvme_create" 00:15:36.228 }, 00:15:36.228 { 00:15:36.228 "method": "bdev_wait_for_examine" 00:15:36.228 } 00:15:36.228 ] 00:15:36.228 } 00:15:36.228 ] 00:15:36.228 } 00:15:36.487 [2024-12-11 13:56:29.297226] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:15:36.487 [2024-12-11 13:56:29.297409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73587 ] 00:15:36.487 [2024-12-11 13:56:29.494033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.746 [2024-12-11 13:56:29.608726] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.005 Running I/O for 5 seconds... 
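Note on the unmap pass that just started above: every bdevperf run in this suite is launched the same way — gen_conf prints the JSON bdev config on stdout and bdevperf reads it through a process-substitution descriptor (the /dev/fd/62 visible in the traced command lines). A condensed sketch, not the verbatim xnvme.sh, with the per-pattern loop assumed from the io_pattern_ref trace:

    # Queue depth 64, 4 KiB IOs, 5 s per pattern; -T restricts the run to
    # the xnvme bdev created by the JSON config that gen_conf emits.
    for io_pattern in randread randwrite unmap write_zeroes; do
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            --json <(gen_conf) \
            -q 64 -o 4096 -t 5 -w "$io_pattern" -T xnvme_bdev
    done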
00:15:39.354 71104.00 IOPS, 277.75 MiB/s [2024-12-11T13:56:32.981Z] 71264.00 IOPS, 278.38 MiB/s [2024-12-11T13:56:34.359Z] 71296.00 IOPS, 278.50 MiB/s [2024-12-11T13:56:35.297Z] 71312.00 IOPS, 278.56 MiB/s 00:15:42.250 Latency(us) 00:15:42.250 [2024-12-11T13:56:35.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.250 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:42.250 xnvme_bdev : 5.00 71336.11 278.66 0.00 0.00 894.43 697.47 2421.41 00:15:42.250 [2024-12-11T13:56:35.297Z] =================================================================================================================== 00:15:42.250 [2024-12-11T13:56:35.297Z] Total : 71336.11 278.66 0.00 0.00 894.43 697.47 2421.41 00:15:43.187 13:56:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:43.187 13:56:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:43.187 13:56:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:43.187 13:56:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:43.187 13:56:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:43.187 { 00:15:43.187 "subsystems": [ 00:15:43.187 { 00:15:43.187 "subsystem": "bdev", 00:15:43.187 "config": [ 00:15:43.187 { 00:15:43.187 "params": { 00:15:43.187 "io_mechanism": "io_uring_cmd", 00:15:43.187 "conserve_cpu": false, 00:15:43.187 "filename": "/dev/ng0n1", 00:15:43.187 "name": "xnvme_bdev" 00:15:43.187 }, 00:15:43.187 "method": "bdev_xnvme_create" 00:15:43.187 }, 00:15:43.187 { 00:15:43.187 "method": "bdev_wait_for_examine" 00:15:43.187 } 00:15:43.187 ] 00:15:43.187 } 00:15:43.187 ] 00:15:43.187 } 00:15:43.187 [2024-12-11 13:56:36.186077] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:15:43.187 [2024-12-11 13:56:36.186200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73667 ] 00:15:43.446 [2024-12-11 13:56:36.368056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.446 [2024-12-11 13:56:36.483912] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.013 Running I/O for 5 seconds... 
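The JSON blocks interleaved with the trace are exactly what gen_conf writes to that descriptor. Saved to a file, the config for this conserve_cpu=false pass would look like the following (content copied from the output above; only the heredoc wrapper and the /tmp path are illustrative):

    cat <<'JSON' > /tmp/xnvme_bdev.json
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "io_uring_cmd",
                "conserve_cpu": false,
                "filename": "/dev/ng0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    JSON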
00:15:45.886 69144.00 IOPS, 270.09 MiB/s [2024-12-11T13:56:39.870Z] 68355.00 IOPS, 267.01 MiB/s [2024-12-11T13:56:41.249Z] 68340.67 IOPS, 266.96 MiB/s [2024-12-11T13:56:42.187Z] 55796.25 IOPS, 217.95 MiB/s [2024-12-11T13:56:42.187Z] 50813.40 IOPS, 198.49 MiB/s 00:15:49.140 Latency(us) 00:15:49.140 [2024-12-11T13:56:42.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.140 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:49.140 xnvme_bdev : 5.01 50755.53 198.26 0.00 0.00 1257.45 61.69 17897.38 00:15:49.140 [2024-12-11T13:56:42.187Z] =================================================================================================================== 00:15:49.140 [2024-12-11T13:56:42.187Z] Total : 50755.53 198.26 0.00 0.00 1257.45 61.69 17897.38 00:15:50.078 00:15:50.078 real 0m27.454s 00:15:50.078 user 0m14.007s 00:15:50.078 sys 0m13.042s 00:15:50.078 ************************************ 00:15:50.078 END TEST xnvme_bdevperf 00:15:50.078 ************************************ 00:15:50.078 13:56:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.078 13:56:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:50.078 13:56:42 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:50.078 13:56:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:50.078 13:56:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.078 13:56:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.078 ************************************ 00:15:50.078 START TEST xnvme_fio_plugin 00:15:50.078 ************************************ 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 
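The autotest_common.sh lines being traced here (and continuing below) are the sanitizer-preload step: the fio plugin is built against ASAN, so the matching sanitizer runtime must be preloaded ahead of the plugin for its symbols to resolve. A simplified sketch, with names taken from the trace and control flow condensed:

    sanitizers=('libasan' 'libclang_rt.asan')
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # Find which sanitizer runtime the plugin links against.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break
    done
    # Preload that runtime ahead of the plugin itself, then run fio.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"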
00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:50.078 13:56:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:50.078 { 00:15:50.078 "subsystems": [ 00:15:50.078 { 00:15:50.078 "subsystem": "bdev", 00:15:50.078 "config": [ 00:15:50.078 { 00:15:50.078 "params": { 00:15:50.078 "io_mechanism": "io_uring_cmd", 00:15:50.078 "conserve_cpu": false, 00:15:50.078 "filename": "/dev/ng0n1", 00:15:50.078 "name": "xnvme_bdev" 00:15:50.078 }, 00:15:50.079 "method": "bdev_xnvme_create" 00:15:50.079 }, 00:15:50.079 { 00:15:50.079 "method": "bdev_wait_for_examine" 00:15:50.079 } 00:15:50.079 ] 00:15:50.079 } 00:15:50.079 ] 00:15:50.079 } 00:15:50.338 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:50.338 fio-3.35 00:15:50.338 Starting 1 thread 00:15:56.953 00:15:56.953 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73785: Wed Dec 11 13:56:49 2024 00:15:56.953 read: IOPS=28.1k, BW=110MiB/s (115MB/s)(550MiB/5002msec) 00:15:56.953 slat (nsec): min=4530, max=55121, avg=6421.10, stdev=1882.99 00:15:56.953 clat (usec): min=1522, max=5934, avg=2019.19, stdev=216.48 00:15:56.953 lat (usec): min=1528, max=5940, avg=2025.61, stdev=216.98 00:15:56.953 clat percentiles (usec): 00:15:56.953 | 1.00th=[ 1631], 5.00th=[ 1713], 10.00th=[ 1762], 20.00th=[ 1844], 00:15:56.953 | 30.00th=[ 1893], 40.00th=[ 1958], 50.00th=[ 2008], 60.00th=[ 2057], 00:15:56.953 | 70.00th=[ 2114], 80.00th=[ 2180], 90.00th=[ 2311], 95.00th=[ 2409], 00:15:56.953 | 99.00th=[ 2573], 99.50th=[ 2671], 99.90th=[ 2966], 99.95th=[ 3425], 00:15:56.953 | 99.99th=[ 5014] 00:15:56.953 bw ( KiB/s): min=100864, max=123904, per=99.82%, avg=112394.11, stdev=6567.15, samples=9 00:15:56.953 iops : min=25216, max=30976, avg=28098.44, stdev=1641.82, samples=9 00:15:56.953 lat (msec) : 2=49.78%, 4=50.17%, 10=0.05% 00:15:56.953 cpu : usr=33.75%, sys=65.11%, ctx=14, majf=0, minf=762 00:15:56.953 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:56.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.953 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:56.953 
issued rwts: total=140799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.953 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:56.953 00:15:56.953 Run status group 0 (all jobs): 00:15:56.953 READ: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=550MiB (577MB), run=5002-5002msec 00:15:57.523 ----------------------------------------------------- 00:15:57.523 Suppressions used: 00:15:57.523 count bytes template 00:15:57.523 1 11 /usr/src/fio/parse.c 00:15:57.523 1 8 libtcmalloc_minimal.so 00:15:57.523 1 904 libcrypto.so 00:15:57.523 ----------------------------------------------------- 00:15:57.523 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:57.523 13:56:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite 
--time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:57.523 { 00:15:57.523 "subsystems": [ 00:15:57.523 { 00:15:57.523 "subsystem": "bdev", 00:15:57.523 "config": [ 00:15:57.523 { 00:15:57.523 "params": { 00:15:57.523 "io_mechanism": "io_uring_cmd", 00:15:57.523 "conserve_cpu": false, 00:15:57.523 "filename": "/dev/ng0n1", 00:15:57.523 "name": "xnvme_bdev" 00:15:57.523 }, 00:15:57.523 "method": "bdev_xnvme_create" 00:15:57.523 }, 00:15:57.523 { 00:15:57.523 "method": "bdev_wait_for_examine" 00:15:57.523 } 00:15:57.523 ] 00:15:57.523 } 00:15:57.523 ] 00:15:57.523 } 00:15:57.523 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:57.523 fio-3.35 00:15:57.523 Starting 1 thread 00:16:04.097 00:16:04.097 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73878: Wed Dec 11 13:56:56 2024 00:16:04.097 write: IOPS=27.7k, BW=108MiB/s (113MB/s)(541MiB/5002msec); 0 zone resets 00:16:04.097 slat (nsec): min=4817, max=57555, avg=6718.76, stdev=2125.44 00:16:04.097 clat (usec): min=1263, max=5892, avg=2047.62, stdev=243.37 00:16:04.097 lat (usec): min=1275, max=5901, avg=2054.34, stdev=244.04 00:16:04.097 clat percentiles (usec): 00:16:04.097 | 1.00th=[ 1598], 5.00th=[ 1696], 10.00th=[ 1762], 20.00th=[ 1844], 00:16:04.097 | 30.00th=[ 1909], 40.00th=[ 1975], 50.00th=[ 2024], 60.00th=[ 2089], 00:16:04.097 | 70.00th=[ 2147], 80.00th=[ 2245], 90.00th=[ 2343], 95.00th=[ 2442], 00:16:04.097 | 99.00th=[ 2638], 99.50th=[ 2704], 99.90th=[ 2868], 99.95th=[ 3032], 00:16:04.097 | 99.99th=[ 5800] 00:16:04.097 bw ( KiB/s): min=105472, max=117248, per=99.75%, avg=110421.33, stdev=3638.44, samples=9 00:16:04.097 iops : min=26368, max=29312, avg=27605.33, stdev=909.61, samples=9 00:16:04.097 lat (msec) : 2=45.13%, 4=54.83%, 10=0.05% 00:16:04.097 cpu : usr=35.03%, sys=63.87%, ctx=9, majf=0, minf=763 00:16:04.097 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:04.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.097 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:04.097 issued rwts: total=0,138429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.097 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:04.097 00:16:04.097 Run status group 0 (all jobs): 00:16:04.097 WRITE: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=541MiB (567MB), run=5002-5002msec 00:16:04.665 ----------------------------------------------------- 00:16:04.666 Suppressions used: 00:16:04.666 count bytes template 00:16:04.666 1 11 /usr/src/fio/parse.c 00:16:04.666 1 8 libtcmalloc_minimal.so 00:16:04.666 1 904 libcrypto.so 00:16:04.666 ----------------------------------------------------- 00:16:04.666 00:16:04.666 00:16:04.666 real 0m14.672s 00:16:04.666 user 0m7.134s 00:16:04.666 sys 0m7.182s 00:16:04.666 13:56:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.666 ************************************ 00:16:04.666 END TEST xnvme_fio_plugin 00:16:04.666 ************************************ 00:16:04.666 13:56:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:04.925 13:56:57 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:04.925 13:56:57 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:04.925 13:56:57 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:16:04.925 13:56:57 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test 
xnvme_rpc xnvme_rpc 00:16:04.925 13:56:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:04.925 13:56:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.925 13:56:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:04.925 ************************************ 00:16:04.925 START TEST xnvme_rpc 00:16:04.925 ************************************ 00:16:04.925 13:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:04.925 13:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:04.925 13:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:04.925 13:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:04.925 13:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:04.925 13:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73971 00:16:04.925 13:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:04.925 13:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73971 00:16:04.925 13:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73971 ']' 00:16:04.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.925 13:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.925 13:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.925 13:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.925 13:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.925 13:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.925 [2024-12-11 13:56:57.864273] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:16:04.925 [2024-12-11 13:56:57.864403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73971 ] 00:16:05.184 [2024-12-11 13:56:58.045895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.184 [2024-12-11 13:56:58.159101] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.122 xnvme_bdev 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.122 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73971 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73971 ']' 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73971 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73971 00:16:06.382 killing process with pid 73971 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73971' 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73971 00:16:06.382 13:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73971 00:16:08.917 00:16:08.917 real 0m3.925s 00:16:08.917 user 0m4.017s 00:16:08.917 sys 0m0.543s 00:16:08.917 ************************************ 00:16:08.917 END TEST xnvme_rpc 00:16:08.917 ************************************ 00:16:08.917 13:57:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.917 13:57:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.917 13:57:01 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:08.917 13:57:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:08.917 13:57:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.917 13:57:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:08.917 ************************************ 00:16:08.917 START TEST xnvme_bdevperf 00:16:08.917 ************************************ 00:16:08.917 13:57:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:08.917 13:57:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:08.917 13:57:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:16:08.917 13:57:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:08.917 13:57:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:08.917 13:57:01 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:08.917 13:57:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:08.917 13:57:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:08.917 { 00:16:08.917 "subsystems": [ 00:16:08.917 { 00:16:08.917 "subsystem": "bdev", 00:16:08.917 "config": [ 00:16:08.917 { 00:16:08.917 "params": { 00:16:08.917 "io_mechanism": "io_uring_cmd", 00:16:08.917 "conserve_cpu": true, 00:16:08.917 "filename": "/dev/ng0n1", 00:16:08.917 "name": "xnvme_bdev" 00:16:08.917 }, 00:16:08.917 "method": "bdev_xnvme_create" 00:16:08.917 }, 00:16:08.917 { 00:16:08.917 "method": "bdev_wait_for_examine" 00:16:08.917 } 00:16:08.917 ] 00:16:08.917 } 00:16:08.917 ] 00:16:08.917 } 00:16:08.917 [2024-12-11 13:57:01.851118] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:16:08.917 [2024-12-11 13:57:01.851383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74059 ] 00:16:09.177 [2024-12-11 13:57:02.031838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.177 [2024-12-11 13:57:02.143349] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.775 Running I/O for 5 seconds... 00:16:11.650 33600.00 IOPS, 131.25 MiB/s [2024-12-11T13:57:05.633Z] 32096.00 IOPS, 125.38 MiB/s [2024-12-11T13:57:06.569Z] 33664.00 IOPS, 131.50 MiB/s [2024-12-11T13:57:07.507Z] 33248.00 IOPS, 129.88 MiB/s 00:16:14.460 Latency(us) 00:16:14.460 [2024-12-11T13:57:07.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.460 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:14.460 xnvme_bdev : 5.00 32923.63 128.61 0.00 0.00 1938.19 980.41 6027.21 00:16:14.460 [2024-12-11T13:57:07.507Z] =================================================================================================================== 00:16:14.460 [2024-12-11T13:57:07.507Z] Total : 32923.63 128.61 0.00 0.00 1938.19 980.41 6027.21 00:16:15.863 13:57:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:15.864 13:57:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:15.864 13:57:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:15.864 13:57:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:15.864 13:57:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:15.864 { 00:16:15.864 "subsystems": [ 00:16:15.864 { 00:16:15.864 "subsystem": "bdev", 00:16:15.864 "config": [ 00:16:15.864 { 00:16:15.864 "params": { 00:16:15.864 "io_mechanism": "io_uring_cmd", 00:16:15.864 "conserve_cpu": true, 00:16:15.864 "filename": "/dev/ng0n1", 00:16:15.864 "name": "xnvme_bdev" 00:16:15.864 }, 00:16:15.864 "method": "bdev_xnvme_create" 00:16:15.864 }, 00:16:15.864 { 00:16:15.864 "method": "bdev_wait_for_examine" 00:16:15.864 } 00:16:15.864 ] 00:16:15.864 } 00:16:15.864 ] 00:16:15.864 } 00:16:15.864 [2024-12-11 13:57:08.695856] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:16:15.864 [2024-12-11 13:57:08.695972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74133 ] 00:16:15.864 [2024-12-11 13:57:08.876138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.123 [2024-12-11 13:57:08.984213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.382 Running I/O for 5 seconds... 00:16:18.697 30272.00 IOPS, 118.25 MiB/s [2024-12-11T13:57:12.681Z] 30048.00 IOPS, 117.38 MiB/s [2024-12-11T13:57:13.619Z] 30501.33 IOPS, 119.15 MiB/s [2024-12-11T13:57:14.556Z] 30127.25 IOPS, 117.68 MiB/s [2024-12-11T13:57:14.556Z] 30271.40 IOPS, 118.25 MiB/s 00:16:21.509 Latency(us) 00:16:21.509 [2024-12-11T13:57:14.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.509 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:21.509 xnvme_bdev : 5.00 30256.90 118.19 0.00 0.00 2108.77 51.82 15791.81 00:16:21.509 [2024-12-11T13:57:14.556Z] =================================================================================================================== 00:16:21.509 [2024-12-11T13:57:14.556Z] Total : 30256.90 118.19 0.00 0.00 2108.77 51.82 15791.81 00:16:22.446 13:57:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:22.446 13:57:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:22.446 13:57:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:16:22.446 13:57:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:22.446 13:57:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:22.446 { 00:16:22.446 "subsystems": [ 00:16:22.446 { 00:16:22.446 "subsystem": "bdev", 00:16:22.446 "config": [ 00:16:22.446 { 00:16:22.446 "params": { 00:16:22.446 "io_mechanism": "io_uring_cmd", 00:16:22.446 "conserve_cpu": true, 00:16:22.446 "filename": "/dev/ng0n1", 00:16:22.446 "name": "xnvme_bdev" 00:16:22.446 }, 00:16:22.446 "method": "bdev_xnvme_create" 00:16:22.446 }, 00:16:22.446 { 00:16:22.446 "method": "bdev_wait_for_examine" 00:16:22.446 } 00:16:22.446 ] 00:16:22.446 } 00:16:22.446 ] 00:16:22.446 } 00:16:22.704 [2024-12-11 13:57:15.526209] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:16:22.704 [2024-12-11 13:57:15.526474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74213 ] 00:16:22.704 [2024-12-11 13:57:15.706137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.966 [2024-12-11 13:57:15.818104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.224 Running I/O for 5 seconds... 
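This block repeats the io_uring_cmd suite with conserve_cpu=true. The toggle travels two routes in the trace: gen_conf embeds it in the bdev JSON for the perf runs, and xnvme_rpc passes the -c flag when creating the bdev over RPC, via the cc mapping set up at the start of the pass. A sketch of that plumbing, condensed from the traced lines:

    declare -A cc=([false]='' [true]='-c')
    declare -A method_bdev_xnvme_create_0
    conserve_cpu=true
    # RPC route: create/inspect/delete the bdev against a live spdk_tgt.
    rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd ${cc[$conserve_cpu]}
    # JSON route: the same knob lands in the gen_conf output as a bool param.
    method_bdev_xnvme_create_0["conserve_cpu"]=$conserve_cpu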
00:16:25.537 71296.00 IOPS, 278.50 MiB/s [2024-12-11T13:57:19.522Z] 71136.00 IOPS, 277.88 MiB/s [2024-12-11T13:57:20.460Z] 71232.00 IOPS, 278.25 MiB/s [2024-12-11T13:57:21.397Z] 71232.00 IOPS, 278.25 MiB/s [2024-12-11T13:57:21.397Z] 71296.00 IOPS, 278.50 MiB/s 00:16:28.350 Latency(us) 00:16:28.350 [2024-12-11T13:57:21.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.350 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:16:28.350 xnvme_bdev : 5.00 71276.61 278.42 0.00 0.00 895.24 628.38 2421.41 00:16:28.350 [2024-12-11T13:57:21.397Z] =================================================================================================================== 00:16:28.350 [2024-12-11T13:57:21.397Z] Total : 71276.61 278.42 0.00 0.00 895.24 628.38 2421.41 00:16:29.287 13:57:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:29.287 13:57:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:29.287 13:57:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:16:29.287 13:57:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:29.287 13:57:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:29.287 { 00:16:29.287 "subsystems": [ 00:16:29.287 { 00:16:29.287 "subsystem": "bdev", 00:16:29.287 "config": [ 00:16:29.287 { 00:16:29.287 "params": { 00:16:29.287 "io_mechanism": "io_uring_cmd", 00:16:29.287 "conserve_cpu": true, 00:16:29.287 "filename": "/dev/ng0n1", 00:16:29.287 "name": "xnvme_bdev" 00:16:29.287 }, 00:16:29.287 "method": "bdev_xnvme_create" 00:16:29.287 }, 00:16:29.287 { 00:16:29.287 "method": "bdev_wait_for_examine" 00:16:29.287 } 00:16:29.287 ] 00:16:29.287 } 00:16:29.287 ] 00:16:29.287 } 00:16:29.546 [2024-12-11 13:57:22.354172] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:16:29.546 [2024-12-11 13:57:22.354290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74288 ] 00:16:29.546 [2024-12-11 13:57:22.530874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.804 [2024-12-11 13:57:22.640685] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.063 Running I/O for 5 seconds... 
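A quick arithmetic check that applies to every Device Information table in this log: the MiB/s column is just IOPS times the 4 KiB IO size. For the unmap totals above (plain bc, not part of the test scripts):

    # MiB/s = IOPS * 4096 / 2^20
    echo '71276.61 * 4096 / 1048576' | bc -l    # ~278.42, matching the table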
00:16:32.378 37100.00 IOPS, 144.92 MiB/s [2024-12-11T13:57:25.993Z] 35414.00 IOPS, 138.34 MiB/s [2024-12-11T13:57:27.369Z] 35068.67 IOPS, 136.99 MiB/s [2024-12-11T13:57:28.304Z] 35763.00 IOPS, 139.70 MiB/s 00:16:35.257 Latency(us) 00:16:35.257 [2024-12-11T13:57:28.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.257 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:16:35.257 xnvme_bdev : 5.00 35782.46 139.78 0.00 0.00 1782.01 389.86 15791.81 00:16:35.257 [2024-12-11T13:57:28.304Z] =================================================================================================================== 00:16:35.257 [2024-12-11T13:57:28.304Z] Total : 35782.46 139.78 0.00 0.00 1782.01 389.86 15791.81 00:16:36.194 00:16:36.194 real 0m27.311s 00:16:36.194 user 0m17.676s 00:16:36.194 sys 0m8.237s 00:16:36.194 13:57:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.194 13:57:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:36.194 ************************************ 00:16:36.194 END TEST xnvme_bdevperf 00:16:36.194 ************************************ 00:16:36.194 13:57:29 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:36.194 13:57:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:36.194 13:57:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.194 13:57:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:36.194 ************************************ 00:16:36.194 START TEST xnvme_fio_plugin 00:16:36.194 ************************************ 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- 
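For reference, the command behind each xnvme_fio_plugin pass, condensed from the invocations traced above; <(gen_conf) stands in for the /dev/fd/62 plumbing, and $asan_lib/$plugin come from the preload step sketched earlier:

    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=<(gen_conf) \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 \
        --numjobs=1 --thread=1 --time_based --runtime=5 \
        --rw=randread --name xnvme_bdev

The cpu lines in the fio summaries (usr=/sys=) are where the conserve_cpu setting is visible across the two fio_plugin suites: usr=33.75%/sys=65.11% for the conserve_cpu=false randread pass earlier versus usr=50.30%/sys=47.04% for the conserve_cpu=true pass here.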
common/autotest_common.sh@1345 -- # shift 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:36.194 13:57:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:36.194 { 00:16:36.194 "subsystems": [ 00:16:36.194 { 00:16:36.194 "subsystem": "bdev", 00:16:36.194 "config": [ 00:16:36.194 { 00:16:36.194 "params": { 00:16:36.194 "io_mechanism": "io_uring_cmd", 00:16:36.194 "conserve_cpu": true, 00:16:36.194 "filename": "/dev/ng0n1", 00:16:36.194 "name": "xnvme_bdev" 00:16:36.194 }, 00:16:36.194 "method": "bdev_xnvme_create" 00:16:36.194 }, 00:16:36.194 { 00:16:36.194 "method": "bdev_wait_for_examine" 00:16:36.194 } 00:16:36.194 ] 00:16:36.194 } 00:16:36.194 ] 00:16:36.194 } 00:16:36.453 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:36.453 fio-3.35 00:16:36.453 Starting 1 thread 00:16:43.025 00:16:43.025 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74407: Wed Dec 11 13:57:35 2024 00:16:43.025 read: IOPS=30.1k, BW=118MiB/s (123MB/s)(588MiB/5001msec) 00:16:43.025 slat (nsec): min=2340, max=59241, avg=6037.54, stdev=2219.16 00:16:43.025 clat (usec): min=861, max=3864, avg=1888.18, stdev=318.95 00:16:43.025 lat (usec): min=864, max=3871, avg=1894.22, stdev=320.10 00:16:43.025 clat percentiles (usec): 00:16:43.025 | 1.00th=[ 1037], 5.00th=[ 1319], 10.00th=[ 1565], 20.00th=[ 1663], 00:16:43.025 | 30.00th=[ 1745], 40.00th=[ 1811], 50.00th=[ 1876], 60.00th=[ 1942], 00:16:43.025 | 70.00th=[ 2024], 80.00th=[ 2147], 90.00th=[ 2311], 95.00th=[ 2409], 00:16:43.025 | 99.00th=[ 2638], 99.50th=[ 2704], 99.90th=[ 3097], 99.95th=[ 3425], 00:16:43.025 | 99.99th=[ 3785] 00:16:43.025 bw ( KiB/s): min=108032, max=126464, per=99.27%, avg=119523.56, stdev=5196.93, samples=9 00:16:43.025 iops : min=27008, max=31616, avg=29880.89, stdev=1299.23, samples=9 00:16:43.025 lat (usec) : 1000=0.57% 00:16:43.025 lat (msec) : 2=66.59%, 4=32.85% 00:16:43.025 cpu : usr=50.30%, sys=47.04%, ctx=8, majf=0, minf=762 00:16:43.025 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:43.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.025 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:43.025 issued rwts: total=150528,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:16:43.025 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:43.025 00:16:43.025 Run status group 0 (all jobs): 00:16:43.025 READ: bw=118MiB/s (123MB/s), 118MiB/s-118MiB/s (123MB/s-123MB/s), io=588MiB (617MB), run=5001-5001msec 00:16:43.593 ----------------------------------------------------- 00:16:43.593 Suppressions used: 00:16:43.593 count bytes template 00:16:43.593 1 11 /usr/src/fio/parse.c 00:16:43.593 1 8 libtcmalloc_minimal.so 00:16:43.593 1 904 libcrypto.so 00:16:43.593 ----------------------------------------------------- 00:16:43.593 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:43.593 { 00:16:43.593 "subsystems": [ 00:16:43.593 { 00:16:43.593 "subsystem": "bdev", 00:16:43.593 "config": [ 00:16:43.593 { 00:16:43.593 "params": { 00:16:43.593 "io_mechanism": "io_uring_cmd", 00:16:43.593 "conserve_cpu": true, 00:16:43.593 "filename": "/dev/ng0n1", 00:16:43.593 "name": "xnvme_bdev" 00:16:43.593 }, 00:16:43.593 "method": "bdev_xnvme_create" 00:16:43.593 }, 00:16:43.593 { 00:16:43.593 "method": "bdev_wait_for_examine" 00:16:43.593 } 00:16:43.593 ] 00:16:43.593 } 00:16:43.593 ] 00:16:43.593 } 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:43.593 13:57:36 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:43.593 13:57:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:43.852 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:43.852 fio-3.35 00:16:43.852 Starting 1 thread 00:16:50.491 00:16:50.491 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74503: Wed Dec 11 13:57:42 2024 00:16:50.492 write: IOPS=32.6k, BW=127MiB/s (134MB/s)(637MiB/5002msec); 0 zone resets 00:16:50.492 slat (usec): min=2, max=117, avg= 5.63, stdev= 2.39 00:16:50.492 clat (usec): min=797, max=3410, avg=1740.52, stdev=425.78 00:16:50.492 lat (usec): min=800, max=3421, avg=1746.15, stdev=427.30 00:16:50.492 clat percentiles (usec): 00:16:50.492 | 1.00th=[ 889], 5.00th=[ 963], 10.00th=[ 1037], 20.00th=[ 1434], 00:16:50.492 | 30.00th=[ 1582], 40.00th=[ 1680], 50.00th=[ 1762], 60.00th=[ 1860], 00:16:50.492 | 70.00th=[ 1958], 80.00th=[ 2089], 90.00th=[ 2278], 95.00th=[ 2409], 00:16:50.492 | 99.00th=[ 2606], 99.50th=[ 2671], 99.90th=[ 2868], 99.95th=[ 2933], 00:16:50.492 | 99.99th=[ 3261] 00:16:50.492 bw ( KiB/s): min=107520, max=213077, per=100.00%, avg=132138.00, stdev=31905.27, samples=9 00:16:50.492 iops : min=26880, max=53269, avg=33034.44, stdev=7976.26, samples=9 00:16:50.492 lat (usec) : 1000=7.73% 00:16:50.492 lat (msec) : 2=65.43%, 4=26.85% 00:16:50.492 cpu : usr=51.69%, sys=45.67%, ctx=44, majf=0, minf=763 00:16:50.492 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:50.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.492 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:50.492 issued rwts: total=0,163072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.492 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:50.492 00:16:50.492 Run status group 0 (all jobs): 00:16:50.492 WRITE: bw=127MiB/s (134MB/s), 127MiB/s-127MiB/s (134MB/s-134MB/s), io=637MiB (668MB), run=5002-5002msec 00:16:50.751 ----------------------------------------------------- 00:16:50.751 Suppressions used: 00:16:50.751 count bytes template 00:16:50.751 1 11 /usr/src/fio/parse.c 00:16:50.751 1 8 libtcmalloc_minimal.so 00:16:50.751 1 904 libcrypto.so 00:16:50.751 ----------------------------------------------------- 00:16:50.751 00:16:50.751 00:16:50.751 real 0m14.609s 00:16:50.751 user 0m8.720s 00:16:50.751 sys 0m5.366s 00:16:50.751 ************************************ 00:16:50.751 END TEST xnvme_fio_plugin 00:16:50.751 ************************************ 00:16:50.751 13:57:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.752 13:57:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:51.011 13:57:43 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73971 00:16:51.011 13:57:43 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73971 ']' 00:16:51.011 13:57:43 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73971 00:16:51.011 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73971) - No such process 
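Aside on the fio_plugin runs above: before each fio launch the harness runs ldd on the spdk_bdev ioengine, greps out the resolved libasan path, and prepends it to LD_PRELOAD together with the plugin itself, so ASan initializes before fio dlopen()s the engine. A condensed sketch of that detection; the libasan path is the one resolved in this run and will differ per system, and --spdk_json_conf=/dev/fd/62 assumes the wrapper feeds the config on that descriptor as gen_conf does above:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 here
    if [[ -n "$asan_lib" ]]; then
      LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k \
        --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 \
        --thread=1 --name xnvme_bdev
    fi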
00:16:51.011 Process with pid 73971 is not found 00:16:51.011 13:57:43 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73971 is not found' 00:16:51.011 ************************************ 00:16:51.011 END TEST nvme_xnvme 00:16:51.011 ************************************ 00:16:51.011 00:16:51.011 real 3m50.547s 00:16:51.011 user 2m5.050s 00:16:51.011 sys 1m28.543s 00:16:51.011 13:57:43 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.011 13:57:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:51.011 13:57:43 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:51.011 13:57:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:51.011 13:57:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.011 13:57:43 -- common/autotest_common.sh@10 -- # set +x 00:16:51.011 ************************************ 00:16:51.011 START TEST blockdev_xnvme 00:16:51.011 ************************************ 00:16:51.011 13:57:43 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:51.011 * Looking for test storage... 00:16:51.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:51.011 13:57:44 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:51.011 13:57:44 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:16:51.011 13:57:44 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:51.271 13:57:44 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.271 13:57:44 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:16:51.271 13:57:44 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.271 13:57:44 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:51.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.271 --rc genhtml_branch_coverage=1 00:16:51.271 --rc genhtml_function_coverage=1 00:16:51.271 --rc genhtml_legend=1 00:16:51.271 --rc geninfo_all_blocks=1 00:16:51.271 --rc geninfo_unexecuted_blocks=1 00:16:51.271 00:16:51.271 ' 00:16:51.271 13:57:44 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:51.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.271 --rc genhtml_branch_coverage=1 00:16:51.271 --rc genhtml_function_coverage=1 00:16:51.271 --rc genhtml_legend=1 00:16:51.271 --rc geninfo_all_blocks=1 00:16:51.271 --rc geninfo_unexecuted_blocks=1 00:16:51.271 00:16:51.271 ' 00:16:51.271 13:57:44 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:51.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.271 --rc genhtml_branch_coverage=1 00:16:51.271 --rc genhtml_function_coverage=1 00:16:51.271 --rc genhtml_legend=1 00:16:51.271 --rc geninfo_all_blocks=1 00:16:51.271 --rc geninfo_unexecuted_blocks=1 00:16:51.271 00:16:51.271 ' 00:16:51.271 13:57:44 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:51.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.271 --rc genhtml_branch_coverage=1 00:16:51.271 --rc genhtml_function_coverage=1 00:16:51.271 --rc genhtml_legend=1 00:16:51.271 --rc geninfo_all_blocks=1 00:16:51.271 --rc geninfo_unexecuted_blocks=1 00:16:51.271 00:16:51.271 ' 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74643 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:51.271 13:57:44 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74643 00:16:51.271 13:57:44 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 74643 ']' 00:16:51.271 13:57:44 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.271 13:57:44 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.272 13:57:44 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.272 13:57:44 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.272 13:57:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:51.272 [2024-12-11 13:57:44.235403] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
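The blockdev_xnvme suite starting here follows the usual start_spdk_tgt/waitforlisten handshake: launch spdk_tgt in the background, record its pid (74643 in this run), and poll the RPC socket until it answers. A simplified stand-in for waitforlisten, whose real implementation lives in autotest_common.sh:

    ./build/bin/spdk_tgt '' '' &
    spdk_tgt_pid=$!
    # poll until /var/tmp/spdk.sock accepts RPCs, bailing out if the target died
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
      kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1
      sleep 0.2
    done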
00:16:51.272 [2024-12-11 13:57:44.235787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74643 ] 00:16:51.532 [2024-12-11 13:57:44.418803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.532 [2024-12-11 13:57:44.523393] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.470 13:57:45 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.470 13:57:45 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:16:52.470 13:57:45 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:16:52.470 13:57:45 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:16:52.470 13:57:45 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:16:52.470 13:57:45 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:16:52.470 13:57:45 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:53.038 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:53.606 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:16:53.865 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:16:53.865 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:16:53.865 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:16:53.865 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:16:53.865 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:16:53.865 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:16:53.865 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:16:53.865 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:16:53.865 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:16:53.865 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:53.865 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:16:53.865 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:53.865 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:16:53.865 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:16:53.866 13:57:46 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:16:53.866 nvme0n1 00:16:53.866 nvme0n2 00:16:53.866 nvme0n3 00:16:53.866 nvme1n1 00:16:53.866 nvme2n1 00:16:53.866 nvme3n1 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.866 13:57:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:53.866 
13:57:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.866 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:16:54.126 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:16:54.126 13:57:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.126 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:16:54.126 13:57:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:54.126 13:57:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.126 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:16:54.126 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:16:54.126 13:57:46 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e55642cf-39b0-4a2e-ad72-0dd0d8f97acd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e55642cf-39b0-4a2e-ad72-0dd0d8f97acd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "b6f781a5-584d-445f-8bbf-a772cde21ef6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b6f781a5-584d-445f-8bbf-a772cde21ef6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "28f7078b-c804-4788-a0fd-ea1063f6643e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "28f7078b-c804-4788-a0fd-ea1063f6643e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "795b2f85-4301-419b-9185-09b8202652cd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "795b2f85-4301-419b-9185-09b8202652cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "20d92a31-cc57-44dd-b4d5-1268d1803c26"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "20d92a31-cc57-44dd-b4d5-1268d1803c26",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "fb6b0aa5-df29-4a56-9f51-b52eace41758"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "fb6b0aa5-df29-4a56-9f51-b52eace41758",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:54.127 13:57:47 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:16:54.127 13:57:47 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:16:54.127 13:57:47 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:16:54.127 13:57:47 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 74643 00:16:54.127 13:57:47 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74643 ']' 00:16:54.127 13:57:47 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 74643 00:16:54.127 13:57:47 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:16:54.127 13:57:47 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.127 13:57:47 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 74643 00:16:54.127 killing process with pid 74643 00:16:54.127 13:57:47 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.127 13:57:47 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.127 13:57:47 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74643' 00:16:54.127 13:57:47 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 74643 00:16:54.127 13:57:47 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 74643 00:16:56.661 13:57:49 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:56.661 13:57:49 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:56.661 13:57:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:56.661 13:57:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.661 13:57:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:56.661 ************************************ 00:16:56.661 START TEST bdev_hello_world 00:16:56.661 ************************************ 00:16:56.661 13:57:49 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:56.661 [2024-12-11 13:57:49.565514] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:16:56.661 [2024-12-11 13:57:49.565649] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74934 ] 00:16:56.931 [2024-12-11 13:57:49.747271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.931 [2024-12-11 13:57:49.861202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.518 [2024-12-11 13:57:50.306919] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:57.518 [2024-12-11 13:57:50.306966] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:16:57.518 [2024-12-11 13:57:50.306985] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:57.518 [2024-12-11 13:57:50.309057] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:57.518 [2024-12-11 13:57:50.309311] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:57.518 [2024-12-11 13:57:50.309333] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:57.518 [2024-12-11 13:57:50.309556] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
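The bdev_hello_world pass above is the full open, write, read-back round trip against the first xnvme bdev; hello_bdev takes the generated JSON config plus a bdev name. Reconstructed from the run_test invocation earlier in this section:

    ./build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1
    # on success the notices end with:
    #   Read string from bdev : Hello World!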
00:16:57.518 00:16:57.518 [2024-12-11 13:57:50.309576] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:58.455 00:16:58.455 real 0m1.943s 00:16:58.455 ************************************ 00:16:58.455 END TEST bdev_hello_world 00:16:58.455 ************************************ 00:16:58.455 user 0m1.585s 00:16:58.455 sys 0m0.242s 00:16:58.455 13:57:51 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.455 13:57:51 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:58.455 13:57:51 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:16:58.455 13:57:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:58.455 13:57:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.456 13:57:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:58.456 ************************************ 00:16:58.456 START TEST bdev_bounds 00:16:58.456 ************************************ 00:16:58.456 13:57:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:16:58.456 Process bdevio pid: 74976 00:16:58.456 13:57:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74976 00:16:58.456 13:57:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:58.456 13:57:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:58.456 13:57:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74976' 00:16:58.456 13:57:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74976 00:16:58.456 13:57:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74976 ']' 00:16:58.456 13:57:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.456 13:57:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.456 13:57:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.456 13:57:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.456 13:57:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:58.714 [2024-12-11 13:57:51.582765] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:16:58.714 [2024-12-11 13:57:51.582906] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74976 ] 00:16:58.714 [2024-12-11 13:57:51.752708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:58.972 [2024-12-11 13:57:51.864542] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.972 [2024-12-11 13:57:51.864672] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.972 [2024-12-11 13:57:51.864716] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.541 13:57:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.541 13:57:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:16:59.541 13:57:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:59.541 I/O targets: 00:16:59.541 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:59.541 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:59.541 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:59.541 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:16:59.541 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:16:59.541 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:16:59.541 00:16:59.541 00:16:59.541 CUnit - A unit testing framework for C - Version 2.1-3 00:16:59.541 http://cunit.sourceforge.net/ 00:16:59.541 00:16:59.541 00:16:59.541 Suite: bdevio tests on: nvme3n1 00:16:59.541 Test: blockdev write read block ...passed 00:16:59.541 Test: blockdev write zeroes read block ...passed 00:16:59.541 Test: blockdev write zeroes read no split ...passed 00:16:59.541 Test: blockdev write zeroes read split ...passed 00:16:59.541 Test: blockdev write zeroes read split partial ...passed 00:16:59.541 Test: blockdev reset ...passed 00:16:59.541 Test: blockdev write read 8 blocks ...passed 00:16:59.541 Test: blockdev write read size > 128k ...passed 00:16:59.541 Test: blockdev write read invalid size ...passed 00:16:59.541 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:59.541 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:59.541 Test: blockdev write read max offset ...passed 00:16:59.541 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:59.541 Test: blockdev writev readv 8 blocks ...passed 00:16:59.541 Test: blockdev writev readv 30 x 1block ...passed 00:16:59.541 Test: blockdev writev readv block ...passed 00:16:59.541 Test: blockdev writev readv size > 128k ...passed 00:16:59.541 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:59.800 Test: blockdev comparev and writev ...passed 00:16:59.800 Test: blockdev nvme passthru rw ...passed 00:16:59.800 Test: blockdev nvme passthru vendor specific ...passed 00:16:59.800 Test: blockdev nvme admin passthru ...passed 00:16:59.800 Test: blockdev copy ...passed 00:16:59.800 Suite: bdevio tests on: nvme2n1 00:16:59.800 Test: blockdev write read block ...passed 00:16:59.800 Test: blockdev write zeroes read block ...passed 00:16:59.800 Test: blockdev write zeroes read no split ...passed 00:16:59.800 Test: blockdev write zeroes read split ...passed 00:16:59.800 Test: blockdev write zeroes read split partial ...passed 00:16:59.800 Test: blockdev reset ...passed 
00:16:59.800 Test: blockdev write read 8 blocks ...passed 00:16:59.800 Test: blockdev write read size > 128k ...passed 00:16:59.800 Test: blockdev write read invalid size ...passed 00:16:59.800 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:59.800 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:59.800 Test: blockdev write read max offset ...passed 00:16:59.800 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:59.800 Test: blockdev writev readv 8 blocks ...passed 00:16:59.801 Test: blockdev writev readv 30 x 1block ...passed 00:16:59.801 Test: blockdev writev readv block ...passed 00:16:59.801 Test: blockdev writev readv size > 128k ...passed 00:16:59.801 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:59.801 Test: blockdev comparev and writev ...passed 00:16:59.801 Test: blockdev nvme passthru rw ...passed 00:16:59.801 Test: blockdev nvme passthru vendor specific ...passed 00:16:59.801 Test: blockdev nvme admin passthru ...passed 00:16:59.801 Test: blockdev copy ...passed 00:16:59.801 Suite: bdevio tests on: nvme1n1 00:16:59.801 Test: blockdev write read block ...passed 00:16:59.801 Test: blockdev write zeroes read block ...passed 00:16:59.801 Test: blockdev write zeroes read no split ...passed 00:16:59.801 Test: blockdev write zeroes read split ...passed 00:16:59.801 Test: blockdev write zeroes read split partial ...passed 00:16:59.801 Test: blockdev reset ...passed 00:16:59.801 Test: blockdev write read 8 blocks ...passed 00:16:59.801 Test: blockdev write read size > 128k ...passed 00:16:59.801 Test: blockdev write read invalid size ...passed 00:16:59.801 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:59.801 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:59.801 Test: blockdev write read max offset ...passed 00:16:59.801 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:59.801 Test: blockdev writev readv 8 blocks ...passed 00:16:59.801 Test: blockdev writev readv 30 x 1block ...passed 00:16:59.801 Test: blockdev writev readv block ...passed 00:16:59.801 Test: blockdev writev readv size > 128k ...passed 00:16:59.801 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:59.801 Test: blockdev comparev and writev ...passed 00:16:59.801 Test: blockdev nvme passthru rw ...passed 00:16:59.801 Test: blockdev nvme passthru vendor specific ...passed 00:16:59.801 Test: blockdev nvme admin passthru ...passed 00:16:59.801 Test: blockdev copy ...passed 00:16:59.801 Suite: bdevio tests on: nvme0n3 00:16:59.801 Test: blockdev write read block ...passed 00:16:59.801 Test: blockdev write zeroes read block ...passed 00:16:59.801 Test: blockdev write zeroes read no split ...passed 00:17:00.060 Test: blockdev write zeroes read split ...passed 00:17:00.060 Test: blockdev write zeroes read split partial ...passed 00:17:00.060 Test: blockdev reset ...passed 00:17:00.060 Test: blockdev write read 8 blocks ...passed 00:17:00.060 Test: blockdev write read size > 128k ...passed 00:17:00.060 Test: blockdev write read invalid size ...passed 00:17:00.060 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:00.060 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:00.060 Test: blockdev write read max offset ...passed 00:17:00.060 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:00.060 Test: blockdev writev readv 8 blocks 
...passed 00:17:00.060 Test: blockdev writev readv 30 x 1block ...passed 00:17:00.060 Test: blockdev writev readv block ...passed 00:17:00.060 Test: blockdev writev readv size > 128k ...passed 00:17:00.060 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:00.060 Test: blockdev comparev and writev ...passed 00:17:00.060 Test: blockdev nvme passthru rw ...passed 00:17:00.060 Test: blockdev nvme passthru vendor specific ...passed 00:17:00.060 Test: blockdev nvme admin passthru ...passed 00:17:00.060 Test: blockdev copy ...passed 00:17:00.060 Suite: bdevio tests on: nvme0n2 00:17:00.060 Test: blockdev write read block ...passed 00:17:00.060 Test: blockdev write zeroes read block ...passed 00:17:00.060 Test: blockdev write zeroes read no split ...passed 00:17:00.060 Test: blockdev write zeroes read split ...passed 00:17:00.060 Test: blockdev write zeroes read split partial ...passed 00:17:00.060 Test: blockdev reset ...passed 00:17:00.060 Test: blockdev write read 8 blocks ...passed 00:17:00.060 Test: blockdev write read size > 128k ...passed 00:17:00.060 Test: blockdev write read invalid size ...passed 00:17:00.060 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:00.060 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:00.060 Test: blockdev write read max offset ...passed 00:17:00.060 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:00.060 Test: blockdev writev readv 8 blocks ...passed 00:17:00.060 Test: blockdev writev readv 30 x 1block ...passed 00:17:00.060 Test: blockdev writev readv block ...passed 00:17:00.060 Test: blockdev writev readv size > 128k ...passed 00:17:00.060 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:00.060 Test: blockdev comparev and writev ...passed 00:17:00.060 Test: blockdev nvme passthru rw ...passed 00:17:00.060 Test: blockdev nvme passthru vendor specific ...passed 00:17:00.060 Test: blockdev nvme admin passthru ...passed 00:17:00.060 Test: blockdev copy ...passed 00:17:00.060 Suite: bdevio tests on: nvme0n1 00:17:00.060 Test: blockdev write read block ...passed 00:17:00.060 Test: blockdev write zeroes read block ...passed 00:17:00.060 Test: blockdev write zeroes read no split ...passed 00:17:00.060 Test: blockdev write zeroes read split ...passed 00:17:00.060 Test: blockdev write zeroes read split partial ...passed 00:17:00.060 Test: blockdev reset ...passed 00:17:00.060 Test: blockdev write read 8 blocks ...passed 00:17:00.060 Test: blockdev write read size > 128k ...passed 00:17:00.060 Test: blockdev write read invalid size ...passed 00:17:00.060 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:00.060 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:00.060 Test: blockdev write read max offset ...passed 00:17:00.060 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:00.060 Test: blockdev writev readv 8 blocks ...passed 00:17:00.060 Test: blockdev writev readv 30 x 1block ...passed 00:17:00.060 Test: blockdev writev readv block ...passed 00:17:00.060 Test: blockdev writev readv size > 128k ...passed 00:17:00.061 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:00.061 Test: blockdev comparev and writev ...passed 00:17:00.061 Test: blockdev nvme passthru rw ...passed 00:17:00.061 Test: blockdev nvme passthru vendor specific ...passed 00:17:00.061 Test: blockdev nvme admin passthru ...passed 00:17:00.061 Test: blockdev copy ...passed 
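Before the summary that follows: bdev_bounds starts bdevio in wait mode (-w) against the same JSON config and drives the same 23-test suite over RPC against each of the six xnvme bdevs, which is where the 138 tests in the summary come from (6 suites x 23 tests). Invocation as traced above:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests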
00:17:00.061 00:17:00.061 Run Summary: Type Total Ran Passed Failed Inactive 00:17:00.061 suites 6 6 n/a 0 0 00:17:00.061 tests 138 138 138 0 0 00:17:00.061 asserts 780 780 780 0 n/a 00:17:00.061 00:17:00.061 Elapsed time = 1.544 seconds 00:17:00.061 0 00:17:00.061 13:57:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74976 00:17:00.061 13:57:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74976 ']' 00:17:00.061 13:57:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74976 00:17:00.061 13:57:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:17:00.061 13:57:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.061 13:57:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74976 00:17:00.320 killing process with pid 74976 00:17:00.320 13:57:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.320 13:57:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.320 13:57:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74976' 00:17:00.320 13:57:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74976 00:17:00.320 13:57:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74976 00:17:01.257 ************************************ 00:17:01.257 END TEST bdev_bounds 00:17:01.257 ************************************ 00:17:01.257 13:57:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:01.257 00:17:01.257 real 0m2.758s 00:17:01.257 user 0m6.908s 00:17:01.257 sys 0m0.393s 00:17:01.257 13:57:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.257 13:57:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:01.517 13:57:54 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:17:01.517 13:57:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:01.517 13:57:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.517 13:57:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:01.517 ************************************ 00:17:01.517 START TEST bdev_nbd 00:17:01.517 ************************************ 00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6
00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=75037
00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 75037 /var/tmp/spdk-nbd.sock
00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 75037 ']'
00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:17:01.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 13:57:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:01.517 13:57:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:17:01.517 [2024-12-11 13:57:54.426142] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization...
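[annotation] At this point bdev_svc is up and listening on /var/tmp/spdk-nbd.sock, and the trace that follows binds each bdev to a kernel /dev/nbdX node with the nbd_start_disk RPC, polls /proc/partitions until the node appears, and later detaches it with nbd_stop_disk. Condensed to its essentials (same RPCs and paths as this run; the retry loops and error handling are omitted):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc nbd_start_disk nvme0n1 /dev/nbd0         # export bdev nvme0n1 as /dev/nbd0
  grep -q -w nbd0 /proc/partitions && echo up   # node is usable once it registers
  $rpc nbd_get_disks                            # JSON list of nbd_device/bdev_name pairs
  $rpc nbd_stop_disk /dev/nbd0                  # detach when finished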
00:17:01.517 [2024-12-11 13:57:54.426459] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.776 [2024-12-11 13:57:54.609584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.776 [2024-12-11 13:57:54.719450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.343 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.343 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:17:02.343 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:17:02.343 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:02.343 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:02.343 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:02.343 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:17:02.343 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:02.343 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:02.343 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:02.343 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:02.343 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:02.343 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:02.343 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:02.343 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.601 
1+0 records in 00:17:02.601 1+0 records out 00:17:02.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000665013 s, 6.2 MB/s 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:02.601 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.860 1+0 records in 00:17:02.860 1+0 records out 00:17:02.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697631 s, 5.9 MB/s 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:02.860 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:17:03.119 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:17:03.119 13:57:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:17:03.119 13:57:56 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:17:03.119 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:17:03.119 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:03.119 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:03.119 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:03.119 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:17:03.119 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:03.119 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:03.119 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:03.119 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:03.119 1+0 records in 00:17:03.119 1+0 records out 00:17:03.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000764337 s, 5.4 MB/s 00:17:03.119 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.119 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:03.119 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.119 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:03.119 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:03.119 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:03.119 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:03.119 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:03.378 1+0 records in 00:17:03.378 1+0 records out 00:17:03.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688679 s, 5.9 MB/s 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:03.378 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:03.648 1+0 records in 00:17:03.648 1+0 records out 00:17:03.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000799408 s, 5.1 MB/s 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:03.648 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:17:03.907 13:57:56 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:03.907 1+0 records in 00:17:03.907 1+0 records out 00:17:03.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584749 s, 7.0 MB/s 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:03.907 13:57:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:04.166 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:04.166 { 00:17:04.166 "nbd_device": "/dev/nbd0", 00:17:04.166 "bdev_name": "nvme0n1" 00:17:04.166 }, 00:17:04.166 { 00:17:04.166 "nbd_device": "/dev/nbd1", 00:17:04.166 "bdev_name": "nvme0n2" 00:17:04.166 }, 00:17:04.166 { 00:17:04.166 "nbd_device": "/dev/nbd2", 00:17:04.167 "bdev_name": "nvme0n3" 00:17:04.167 }, 00:17:04.167 { 00:17:04.167 "nbd_device": "/dev/nbd3", 00:17:04.167 "bdev_name": "nvme1n1" 00:17:04.167 }, 00:17:04.167 { 00:17:04.167 "nbd_device": "/dev/nbd4", 00:17:04.167 "bdev_name": "nvme2n1" 00:17:04.167 }, 00:17:04.167 { 00:17:04.167 "nbd_device": "/dev/nbd5", 00:17:04.167 "bdev_name": "nvme3n1" 00:17:04.167 } 00:17:04.167 ]' 00:17:04.167 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:04.167 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:04.167 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:04.167 { 00:17:04.167 "nbd_device": "/dev/nbd0", 00:17:04.167 "bdev_name": "nvme0n1" 00:17:04.167 }, 00:17:04.167 { 00:17:04.167 "nbd_device": "/dev/nbd1", 00:17:04.167 "bdev_name": "nvme0n2" 00:17:04.167 }, 00:17:04.167 { 00:17:04.167 "nbd_device": "/dev/nbd2", 00:17:04.167 "bdev_name": "nvme0n3" 00:17:04.167 }, 00:17:04.167 { 00:17:04.167 "nbd_device": "/dev/nbd3", 00:17:04.167 "bdev_name": "nvme1n1" 00:17:04.167 }, 00:17:04.167 { 00:17:04.167 "nbd_device": "/dev/nbd4", 00:17:04.167 "bdev_name": "nvme2n1" 00:17:04.167 }, 00:17:04.167 { 00:17:04.167 "nbd_device": 
"/dev/nbd5", 00:17:04.167 "bdev_name": "nvme3n1" 00:17:04.167 } 00:17:04.167 ]' 00:17:04.167 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:17:04.167 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:04.167 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:17:04.167 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:04.167 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:04.167 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.167 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:04.426 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:04.426 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:04.426 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:04.426 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.426 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.426 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:04.426 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:04.426 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.426 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.426 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:04.684 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.685 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:17:04.944 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:17:04.944 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:17:04.944 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:17:04.944 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.944 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.944 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:17:04.944 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:04.944 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.944 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.944 13:57:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:17:05.203 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:17:05.203 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:17:05.203 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:17:05.203 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:05.203 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:05.203 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:17:05.203 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:05.203 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:05.203 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:05.203 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:17:05.461 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:17:05.461 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:17:05.461 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:17:05.461 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:05.461 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:05.461 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:17:05.461 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:05.461 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:05.461 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:05.461 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:05.461 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:05.720 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:17:05.979 /dev/nbd0 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.979 1+0 records in 00:17:05.979 1+0 records out 00:17:05.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548583 s, 7.5 MB/s 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:05.979 13:57:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:17:06.238 /dev/nbd1 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.238 1+0 records in 00:17:06.238 1+0 records out 00:17:06.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695709 s, 5.9 MB/s 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:06.238 13:57:59 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:06.238 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:17:06.496 /dev/nbd10 00:17:06.496 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:17:06.496 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:17:06.496 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:17:06.496 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:06.496 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.496 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.496 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:17:06.496 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:06.496 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:06.496 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:06.497 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.497 1+0 records in 00:17:06.497 1+0 records out 00:17:06.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555775 s, 7.4 MB/s 00:17:06.497 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.497 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:06.497 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.497 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:06.497 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:06.497 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:06.497 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:06.497 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:17:06.755 /dev/nbd11 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.755 13:57:59 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.755 1+0 records in 00:17:06.755 1+0 records out 00:17:06.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070344 s, 5.8 MB/s 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:06.755 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:17:07.013 /dev/nbd12 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.013 1+0 records in 00:17:07.013 1+0 records out 00:17:07.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000915079 s, 4.5 MB/s 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:07.013 13:57:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:17:07.272 /dev/nbd13 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.272 1+0 records in 00:17:07.272 1+0 records out 00:17:07.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000782809 s, 5.2 MB/s 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:07.272 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:07.530 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:07.530 { 00:17:07.530 "nbd_device": "/dev/nbd0", 00:17:07.530 "bdev_name": "nvme0n1" 00:17:07.530 }, 00:17:07.530 { 00:17:07.530 "nbd_device": "/dev/nbd1", 00:17:07.530 "bdev_name": "nvme0n2" 00:17:07.530 }, 00:17:07.530 { 00:17:07.530 "nbd_device": "/dev/nbd10", 00:17:07.530 "bdev_name": "nvme0n3" 00:17:07.530 }, 00:17:07.530 { 00:17:07.530 "nbd_device": "/dev/nbd11", 00:17:07.530 "bdev_name": "nvme1n1" 00:17:07.530 }, 00:17:07.530 { 00:17:07.530 "nbd_device": "/dev/nbd12", 00:17:07.530 "bdev_name": "nvme2n1" 00:17:07.530 }, 00:17:07.530 { 00:17:07.530 "nbd_device": "/dev/nbd13", 00:17:07.530 "bdev_name": "nvme3n1" 00:17:07.530 } 00:17:07.530 ]' 00:17:07.530 13:58:00 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:07.530 { 00:17:07.530 "nbd_device": "/dev/nbd0", 00:17:07.530 "bdev_name": "nvme0n1" 00:17:07.530 }, 00:17:07.530 { 00:17:07.530 "nbd_device": "/dev/nbd1", 00:17:07.530 "bdev_name": "nvme0n2" 00:17:07.530 }, 00:17:07.530 { 00:17:07.530 "nbd_device": "/dev/nbd10", 00:17:07.530 "bdev_name": "nvme0n3" 00:17:07.530 }, 00:17:07.530 { 00:17:07.530 "nbd_device": "/dev/nbd11", 00:17:07.530 "bdev_name": "nvme1n1" 00:17:07.530 }, 00:17:07.530 { 00:17:07.530 "nbd_device": "/dev/nbd12", 00:17:07.530 "bdev_name": "nvme2n1" 00:17:07.530 }, 00:17:07.530 { 00:17:07.530 "nbd_device": "/dev/nbd13", 00:17:07.530 "bdev_name": "nvme3n1" 00:17:07.530 } 00:17:07.530 ]' 00:17:07.530 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:07.530 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:07.530 /dev/nbd1 00:17:07.530 /dev/nbd10 00:17:07.530 /dev/nbd11 00:17:07.530 /dev/nbd12 00:17:07.530 /dev/nbd13' 00:17:07.530 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:07.530 /dev/nbd1 00:17:07.530 /dev/nbd10 00:17:07.530 /dev/nbd11 00:17:07.530 /dev/nbd12 00:17:07.530 /dev/nbd13' 00:17:07.530 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:07.530 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:17:07.530 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:17:07.530 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:17:07.530 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:17:07.531 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:17:07.531 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:07.531 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:07.531 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:07.531 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:07.531 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:07.531 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:07.531 256+0 records in 00:17:07.531 256+0 records out 00:17:07.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135013 s, 77.7 MB/s 00:17:07.531 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:07.531 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:07.531 256+0 records in 00:17:07.531 256+0 records out 00:17:07.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12361 s, 8.5 MB/s 00:17:07.789 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:07.789 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:07.789 256+0 records in 00:17:07.789 256+0 records out 00:17:07.789 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.12844 s, 8.2 MB/s 00:17:07.789 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:07.789 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:17:08.048 256+0 records in 00:17:08.048 256+0 records out 00:17:08.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124256 s, 8.4 MB/s 00:17:08.048 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:08.048 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:17:08.048 256+0 records in 00:17:08.048 256+0 records out 00:17:08.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139287 s, 7.5 MB/s 00:17:08.048 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:08.048 13:58:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:17:08.306 256+0 records in 00:17:08.306 256+0 records out 00:17:08.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150105 s, 7.0 MB/s 00:17:08.306 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:08.306 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:17:08.306 256+0 records in 00:17:08.306 256+0 records out 00:17:08.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123721 s, 8.5 MB/s 00:17:08.306 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:17:08.306 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.307 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:08.565 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:08.565 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:08.565 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:08.565 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.565 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.565 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:08.565 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:08.565 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.565 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.565 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:08.824 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:08.824 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:08.824 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:08.824 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.824 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.824 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:08.824 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:08.824 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.824 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.824 13:58:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:17:09.083 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:17:09.083 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:17:09.083 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:17:09.083 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:09.083 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:09.083 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:17:09.083 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:09.083 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:09.083 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:09.083 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:17:09.342 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:17:09.342 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:17:09.342 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:17:09.342 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:09.342 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:09.342 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:17:09.342 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:09.342 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:09.342 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:09.342 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:17:09.600 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:17:09.600 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:17:09.600 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:17:09.600 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:09.600 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:09.600 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:17:09.600 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:09.600 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:09.600 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:09.600 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:17:09.859 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:17:09.859 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:17:09.859 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:17:09.859 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:09.859 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:17:09.859 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:17:09.859 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:09.859 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:09.860 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:09.860 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:09.860 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:09.860 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:09.860 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:09.860 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:10.119 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:10.119 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:10.119 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:10.119 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:10.119 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:10.119 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:10.119 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:10.119 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:10.119 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:10.119 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:10.119 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:10.119 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:10.119 13:58:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:10.119 malloc_lvol_verify 00:17:10.378 13:58:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:10.378 06960f31-69e5-43ae-bf5a-95f1001d3ed5 00:17:10.378 13:58:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:10.637 9e745840-ccc0-4254-961b-e239cdde5af8 00:17:10.637 13:58:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:10.895 /dev/nbd0 00:17:10.895 13:58:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:10.895 13:58:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:10.895 13:58:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:10.895 13:58:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:10.895 13:58:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:10.895 mke2fs 1.47.0 (5-Feb-2023) 00:17:10.895 
Discarding device blocks: 0/4096 done 00:17:10.895 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:10.895 00:17:10.895 Allocating group tables: 0/1 done 00:17:10.895 Writing inode tables: 0/1 done 00:17:10.895 Creating journal (1024 blocks): done 00:17:10.895 Writing superblocks and filesystem accounting information: 0/1 done 00:17:10.895 00:17:10.895 13:58:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:10.895 13:58:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:10.895 13:58:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:10.895 13:58:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:10.895 13:58:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:10.895 13:58:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:10.896 13:58:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:11.155 13:58:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 75037 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 75037 ']' 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 75037 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75037 00:17:11.155 killing process with pid 75037 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75037' 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 75037 00:17:11.155 13:58:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 75037 00:17:12.529 ************************************ 00:17:12.529 END TEST bdev_nbd 00:17:12.529 ************************************ 00:17:12.529 13:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:12.529 00:17:12.529 real 0m10.926s 00:17:12.529 user 0m13.924s 00:17:12.529 sys 0m4.720s 00:17:12.529 13:58:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.529 13:58:05 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:17:12.529 13:58:05 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:17:12.529 13:58:05 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:17:12.529 13:58:05 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:17:12.529 13:58:05 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:17:12.529 13:58:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:12.529 13:58:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.529 13:58:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:12.529 ************************************ 00:17:12.529 START TEST bdev_fio 00:17:12.529 ************************************ 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:12.529 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:17:12.529 
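The nbd teardown traced above leans on a small polling helper: after each nbd_stop_disk RPC, waitfornbd_exit re-checks /proc/partitions until the kernel drops the device name, bounded at 20 attempts. A simplified sketch of that pattern, inferred from the nbd_common.sh trace (the real helper may differ in details):

    # Sketch: poll /proc/partitions until an nbd device disappears.
    # Bounded at 20 attempts, mirroring the (( i <= 20 )) loop traced above.
    waitfornbd_exit_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1    # still registered; give the kernel time
            else
                break        # nbd slot released
            fi
        done
        return 0
    }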
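The fio job file used next is assembled by fio_config_gen, also traced above: it requires a workload name, probes the local fio binary for a 3.x version before accepting the AIO bdev type, and appends serialize_overlap=1 so overlapping verify I/Os cannot race. A minimal sketch of that flow (the [global] and verify stanzas the real helper writes via cat are elided here):

    # Sketch: condensed fio_config_gen flow as traced above.
    fio_config_gen_sketch() {
        local config_file=$1 workload=$2 bdev_type=$3
        [[ -n $workload ]] || return 1    # a workload name is mandatory
        touch "$config_file"
        if [[ $bdev_type == AIO ]]; then
            # AIO-backed runs require a fio 3.x binary.
            [[ $(/usr/src/fio/fio --version) == *fio-3* ]] || return 1
            echo "serialize_overlap=1" >> "$config_file"
        fi
    }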
13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:12.529 ************************************ 00:17:12.529 START TEST bdev_fio_rw_verify 00:17:12.529 ************************************ 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:12.529 13:58:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:17:12.530 13:58:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:12.530 13:58:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:12.530 13:58:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:12.530 13:58:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:17:12.530 13:58:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:12.530 13:58:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:12.530 13:58:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:12.530 13:58:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:17:12.530 13:58:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:12.530 13:58:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:12.788 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.788 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.788 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.788 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.788 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.788 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.788 fio-3.35 00:17:12.788 Starting 6 threads 00:17:25.047 00:17:25.047 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75446: Wed Dec 11 13:58:16 2024 00:17:25.047 read: IOPS=33.6k, BW=131MiB/s (138MB/s)(1312MiB/10001msec) 00:17:25.047 slat (usec): min=2, max=3801, avg= 6.30, stdev= 7.79 00:17:25.047 clat (usec): min=63, max=6867, avg=574.51, 
stdev=182.94 00:17:25.047 lat (usec): min=68, max=6878, avg=580.81, stdev=183.81 00:17:25.047 clat percentiles (usec): 00:17:25.047 | 50.000th=[ 611], 99.000th=[ 996], 99.900th=[ 1680], 99.990th=[ 3949], 00:17:25.047 | 99.999th=[ 5932] 00:17:25.047 write: IOPS=34.0k, BW=133MiB/s (139MB/s)(1329MiB/10001msec); 0 zone resets 00:17:25.047 slat (usec): min=11, max=4368, avg=20.03, stdev=22.38 00:17:25.047 clat (usec): min=85, max=5346, avg=637.25, stdev=190.72 00:17:25.047 lat (usec): min=98, max=5364, avg=657.28, stdev=193.08 00:17:25.047 clat percentiles (usec): 00:17:25.047 | 50.000th=[ 652], 99.000th=[ 1205], 99.900th=[ 1926], 99.990th=[ 2835], 00:17:25.047 | 99.999th=[ 5276] 00:17:25.047 bw ( KiB/s): min=110552, max=153200, per=99.71%, avg=135670.21, stdev=1880.88, samples=114 00:17:25.047 iops : min=27638, max=38300, avg=33917.47, stdev=470.21, samples=114 00:17:25.047 lat (usec) : 100=0.01%, 250=3.72%, 500=20.12%, 750=62.27%, 1000=12.02% 00:17:25.047 lat (msec) : 2=1.79%, 4=0.07%, 10=0.01% 00:17:25.047 cpu : usr=65.46%, sys=23.50%, ctx=7895, majf=0, minf=27897 00:17:25.047 IO depths : 1=12.1%, 2=24.6%, 4=50.4%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:25.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.047 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.047 issued rwts: total=335749,340183,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.047 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:25.047 00:17:25.047 Run status group 0 (all jobs): 00:17:25.047 READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=1312MiB (1375MB), run=10001-10001msec 00:17:25.047 WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=1329MiB (1393MB), run=10001-10001msec 00:17:25.047 ----------------------------------------------------- 00:17:25.047 Suppressions used: 00:17:25.047 count bytes template 00:17:25.047 6 48 /usr/src/fio/parse.c 00:17:25.047 4203 403488 /usr/src/fio/iolog.c 00:17:25.048 1 8 libtcmalloc_minimal.so 00:17:25.048 1 904 libcrypto.so 00:17:25.048 ----------------------------------------------------- 00:17:25.048 00:17:25.048 00:17:25.048 real 0m12.566s 00:17:25.048 user 0m41.346s 00:17:25.048 sys 0m14.521s 00:17:25.048 13:58:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.048 13:58:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:25.048 ************************************ 00:17:25.048 END TEST bdev_fio_rw_verify 00:17:25.048 ************************************ 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:25.048 13:58:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e55642cf-39b0-4a2e-ad72-0dd0d8f97acd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e55642cf-39b0-4a2e-ad72-0dd0d8f97acd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "b6f781a5-584d-445f-8bbf-a772cde21ef6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b6f781a5-584d-445f-8bbf-a772cde21ef6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "28f7078b-c804-4788-a0fd-ea1063f6643e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "28f7078b-c804-4788-a0fd-ea1063f6643e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "795b2f85-4301-419b-9185-09b8202652cd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "795b2f85-4301-419b-9185-09b8202652cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "20d92a31-cc57-44dd-b4d5-1268d1803c26"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "20d92a31-cc57-44dd-b4d5-1268d1803c26",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "fb6b0aa5-df29-4a56-9f51-b52eace41758"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "fb6b0aa5-df29-4a56-9f51-b52eace41758",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:25.307 13:58:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:25.307 13:58:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.307 /home/vagrant/spdk_repo/spdk 00:17:25.307 13:58:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:25.307 13:58:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:25.307 13:58:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
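Before deciding whether to add trim jobs, blockdev.sh pipes the bdev JSON dumped above through jq, keeping only devices that advertise unmap support. Every xNVMe bdev here reports "unmap": false, so the filter prints nothing and the trim pass is skipped by the [[ -n '' ]] check. Run against a live target, the same selection would look roughly like this (sketch; bdev_get_bdevs returns a JSON array, hence the added .[] step):

    # Sketch: list the bdevs that can service trim/unmap on a running target.
    scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'
    # Empty output, as in this run, means no [trim] job sections are generated.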
00:17:25.307 00:17:25.307 real 0m12.784s 00:17:25.307 user 0m41.452s 00:17:25.307 sys 0m14.640s 00:17:25.307 13:58:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.307 13:58:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:25.307 ************************************ 00:17:25.307 END TEST bdev_fio 00:17:25.307 ************************************ 00:17:25.307 13:58:18 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:25.307 13:58:18 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:25.307 13:58:18 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:25.307 13:58:18 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.307 13:58:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:25.307 ************************************ 00:17:25.307 START TEST bdev_verify 00:17:25.307 ************************************ 00:17:25.307 13:58:18 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:25.307 [2024-12-11 13:58:18.260664] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:17:25.307 [2024-12-11 13:58:18.260784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75627 ] 00:17:25.565 [2024-12-11 13:58:18.440453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:25.565 [2024-12-11 13:58:18.558471] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.565 [2024-12-11 13:58:18.558516] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.131 Running I/O for 5 seconds... 
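The bdev_verify pass drives bdevperf with 4 KiB I/O at queue depth 128 for 5 seconds on cores 0-1 (-m 0x3); the -C flag appears to fan each bdev's job out across every core in the mask, which is why each nvme*n1 shows up under both Core Mask 0x1 and 0x2 in the table that follows. The invocation shape, restated for reference:

    # Sketch: the bdev_verify invocation as run above (paths abbreviated).
    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3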
00:17:28.444 25184.00 IOPS, 98.38 MiB/s [2024-12-11T13:58:22.431Z] 25104.00 IOPS, 98.06 MiB/s [2024-12-11T13:58:23.368Z] 24416.00 IOPS, 95.38 MiB/s [2024-12-11T13:58:24.304Z] 24272.00 IOPS, 94.81 MiB/s [2024-12-11T13:58:24.304Z] 24051.20 IOPS, 93.95 MiB/s 00:17:31.257 Latency(us) 00:17:31.257 [2024-12-11T13:58:24.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.257 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.257 Verification LBA range: start 0x0 length 0x80000 00:17:31.257 nvme0n1 : 5.06 1872.12 7.31 0.00 0.00 68260.91 16002.36 56850.51 00:17:31.257 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.257 Verification LBA range: start 0x80000 length 0x80000 00:17:31.257 nvme0n1 : 5.07 1819.34 7.11 0.00 0.00 69611.44 8106.46 67378.38 00:17:31.257 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.257 Verification LBA range: start 0x0 length 0x80000 00:17:31.257 nvme0n2 : 5.04 1880.10 7.34 0.00 0.00 67875.58 8632.85 64009.46 00:17:31.257 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.257 Verification LBA range: start 0x80000 length 0x80000 00:17:31.257 nvme0n2 : 5.04 1802.46 7.04 0.00 0.00 70902.26 11001.63 64430.57 00:17:31.257 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.257 Verification LBA range: start 0x0 length 0x80000 00:17:31.257 nvme0n3 : 5.07 1869.97 7.30 0.00 0.00 68161.02 10106.76 56008.28 00:17:31.257 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.257 Verification LBA range: start 0x80000 length 0x80000 00:17:31.257 nvme0n3 : 5.04 1801.94 7.04 0.00 0.00 70822.70 16844.59 61061.65 00:17:31.257 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.257 Verification LBA range: start 0x0 length 0x20000 00:17:31.257 nvme1n1 : 5.07 1866.69 7.29 0.00 0.00 68194.06 12528.17 58113.85 00:17:31.257 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.257 Verification LBA range: start 0x20000 length 0x20000 00:17:31.257 nvme1n1 : 5.05 1799.20 7.03 0.00 0.00 70824.15 6027.21 74958.44 00:17:31.257 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.257 Verification LBA range: start 0x0 length 0xbd0bd 00:17:31.257 nvme2n1 : 5.08 2711.24 10.59 0.00 0.00 46751.92 4184.83 53692.14 00:17:31.257 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.257 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:31.257 nvme2n1 : 5.05 2660.27 10.39 0.00 0.00 47794.24 5263.94 58534.97 00:17:31.257 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.257 Verification LBA range: start 0x0 length 0xa0000 00:17:31.257 nvme3n1 : 5.07 1868.34 7.30 0.00 0.00 67857.28 4974.42 65693.92 00:17:31.257 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.257 Verification LBA range: start 0xa0000 length 0xa0000 00:17:31.257 nvme3n1 : 5.06 1821.46 7.12 0.00 0.00 69635.16 8632.85 66115.03 00:17:31.257 [2024-12-11T13:58:24.304Z] =================================================================================================================== 00:17:31.257 [2024-12-11T13:58:24.304Z] Total : 23773.13 92.86 0.00 0.00 64232.87 4184.83 74958.44 00:17:32.635 00:17:32.635 real 0m7.161s 00:17:32.635 user 0m10.969s 00:17:32.635 sys 0m2.068s 00:17:32.635 13:58:25 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.635 ************************************ 00:17:32.635 END TEST bdev_verify 00:17:32.635 ************************************ 00:17:32.635 13:58:25 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:32.635 13:58:25 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:32.635 13:58:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:32.635 13:58:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.635 13:58:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:32.635 ************************************ 00:17:32.635 START TEST bdev_verify_big_io 00:17:32.635 ************************************ 00:17:32.635 13:58:25 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:32.635 [2024-12-11 13:58:25.494023] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:17:32.635 [2024-12-11 13:58:25.494149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75729 ] 00:17:32.635 [2024-12-11 13:58:25.674303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:32.894 [2024-12-11 13:58:25.789680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.894 [2024-12-11 13:58:25.789712] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.460 Running I/O for 5 seconds... 
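The big-I/O variant below reuses the same harness with -o 65536, so each MiB/s figure in the samples is simply IOPS times the 64 KiB block size. Checking the first sample that follows:

    # Sketch: 1168 IOPS * 65536 B / 1048576 B-per-MiB = 73.00 MiB/s,
    # matching the first throughput sample below.
    awk 'BEGIN { printf "%.2f MiB/s\n", 1168 * 65536 / 1048576 }'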
00:17:38.529 1168.00 IOPS, 73.00 MiB/s [2024-12-11T13:58:32.144Z] 3192.00 IOPS, 199.50 MiB/s [2024-12-11T13:58:32.404Z] 2989.33 IOPS, 186.83 MiB/s [2024-12-11T13:58:32.404Z] 2899.75 IOPS, 181.23 MiB/s 00:17:39.357 Latency(us) 00:17:39.357 [2024-12-11T13:58:32.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.357 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.357 Verification LBA range: start 0x0 length 0x8000 00:17:39.357 nvme0n1 : 5.47 152.04 9.50 0.00 0.00 815925.48 96856.42 1030889.18 00:17:39.357 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.357 Verification LBA range: start 0x8000 length 0x8000 00:17:39.357 nvme0n1 : 5.77 155.16 9.70 0.00 0.00 805068.45 71168.41 791695.94 00:17:39.357 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.357 Verification LBA range: start 0x0 length 0x8000 00:17:39.357 nvme0n2 : 5.62 144.99 9.06 0.00 0.00 834477.85 58956.08 1468848.63 00:17:39.357 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.357 Verification LBA range: start 0x8000 length 0x8000 00:17:39.357 nvme0n2 : 5.76 177.83 11.11 0.00 0.00 684445.71 104015.37 761375.67 00:17:39.357 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.357 Verification LBA range: start 0x0 length 0x8000 00:17:39.357 nvme0n3 : 5.70 168.31 10.52 0.00 0.00 705704.94 104857.60 1118481.07 00:17:39.357 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.357 Verification LBA range: start 0x8000 length 0x8000 00:17:39.357 nvme0n3 : 5.77 141.38 8.84 0.00 0.00 848165.32 68641.72 1805740.52 00:17:39.357 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.357 Verification LBA range: start 0x0 length 0x2000 00:17:39.357 nvme1n1 : 5.71 154.22 9.64 0.00 0.00 749657.19 74958.44 1347567.55 00:17:39.357 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.357 Verification LBA range: start 0x2000 length 0x2000 00:17:39.357 nvme1n1 : 5.76 155.53 9.72 0.00 0.00 741936.35 51586.57 1273451.33 00:17:39.357 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.357 Verification LBA range: start 0x0 length 0xbd0b 00:17:39.357 nvme2n1 : 5.73 237.28 14.83 0.00 0.00 483563.95 7790.62 1354305.39 00:17:39.357 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.357 Verification LBA range: start 0xbd0b length 0xbd0b 00:17:39.357 nvme2n1 : 5.77 216.38 13.52 0.00 0.00 530736.32 16107.64 741162.15 00:17:39.357 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.357 Verification LBA range: start 0x0 length 0xa000 00:17:39.357 nvme3n1 : 5.74 153.37 9.59 0.00 0.00 725414.67 10948.99 1394732.41 00:17:39.357 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.357 Verification LBA range: start 0xa000 length 0xa000 00:17:39.357 nvme3n1 : 5.78 188.17 11.76 0.00 0.00 593804.03 7158.95 882656.75 00:17:39.357 [2024-12-11T13:58:32.404Z] =================================================================================================================== 00:17:39.357 [2024-12-11T13:58:32.404Z] Total : 2044.68 127.79 0.00 0.00 690863.25 7158.95 1805740.52 00:17:40.735 00:17:40.735 real 0m8.137s 00:17:40.735 user 0m14.768s 00:17:40.735 sys 0m0.579s 00:17:40.735 13:58:33 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:17:40.735 ************************************ 00:17:40.735 END TEST bdev_verify_big_io 00:17:40.735 ************************************ 00:17:40.735 13:58:33 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:40.735 13:58:33 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:40.735 13:58:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:40.735 13:58:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.735 13:58:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:40.735 ************************************ 00:17:40.735 START TEST bdev_write_zeroes 00:17:40.735 ************************************ 00:17:40.735 13:58:33 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:40.735 [2024-12-11 13:58:33.705039] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:17:40.735 [2024-12-11 13:58:33.705163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75841 ] 00:17:40.994 [2024-12-11 13:58:33.884049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.994 [2024-12-11 13:58:33.995149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.560 Running I/O for 1 seconds... 
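Every START TEST / END TEST banner and real/user/sys triple in this log is emitted by the run_test wrapper from autotest_common.sh. Its internals are not shown in this excerpt, so the following is only a sketch of the implied shape (the real helper also juggles xtrace and failure accounting):

    # Sketch: hypothetical condensation of the run_test banner-and-timing wrapper.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }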
00:17:42.494 61152.00 IOPS, 238.88 MiB/s 00:17:42.494 Latency(us) 00:17:42.494 [2024-12-11T13:58:35.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.494 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.494 nvme0n1 : 1.02 9744.01 38.06 0.00 0.00 13123.10 8211.74 28846.37 00:17:42.494 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.494 nvme0n2 : 1.03 9732.80 38.02 0.00 0.00 13132.10 8474.94 28214.70 00:17:42.494 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.494 nvme0n3 : 1.03 9721.52 37.97 0.00 0.00 13139.18 8527.58 28425.25 00:17:42.494 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.494 nvme1n1 : 1.03 9710.65 37.93 0.00 0.00 13145.50 8527.58 28425.25 00:17:42.494 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.494 nvme2n1 : 1.03 11658.43 45.54 0.00 0.00 10941.33 3974.27 22845.48 00:17:42.494 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.494 nvme3n1 : 1.03 9781.79 38.21 0.00 0.00 12977.80 5132.34 28846.37 00:17:42.494 [2024-12-11T13:58:35.541Z] =================================================================================================================== 00:17:42.494 [2024-12-11T13:58:35.541Z] Total : 60349.19 235.74 0.00 0.00 12685.11 3974.27 28846.37 00:17:43.869 00:17:43.869 real 0m2.962s 00:17:43.869 user 0m2.176s 00:17:43.869 sys 0m0.588s 00:17:43.869 13:58:36 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.869 13:58:36 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:43.869 ************************************ 00:17:43.869 END TEST bdev_write_zeroes 00:17:43.869 ************************************ 00:17:43.869 13:58:36 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:43.869 13:58:36 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:43.869 13:58:36 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.869 13:58:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:43.869 ************************************ 00:17:43.869 START TEST bdev_json_nonenclosed 00:17:43.869 ************************************ 00:17:43.869 13:58:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:43.869 [2024-12-11 13:58:36.745121] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:17:43.869 [2024-12-11 13:58:36.745260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75894 ] 00:17:44.128 [2024-12-11 13:58:36.926321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.128 [2024-12-11 13:58:37.038219] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.128 [2024-12-11 13:58:37.038312] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:44.128 [2024-12-11 13:58:37.038334] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:44.128 [2024-12-11 13:58:37.038346] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:44.387 00:17:44.387 real 0m0.645s 00:17:44.387 user 0m0.404s 00:17:44.387 sys 0m0.135s 00:17:44.387 ************************************ 00:17:44.387 END TEST bdev_json_nonenclosed 00:17:44.387 ************************************ 00:17:44.387 13:58:37 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.387 13:58:37 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:44.387 13:58:37 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:44.387 13:58:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:44.387 13:58:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.387 13:58:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:44.387 ************************************ 00:17:44.387 START TEST bdev_json_nonarray 00:17:44.387 ************************************ 00:17:44.387 13:58:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:44.646 [2024-12-11 13:58:37.454883] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:17:44.646 [2024-12-11 13:58:37.455048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75920 ] 00:17:44.646 [2024-12-11 13:58:37.634984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.905 [2024-12-11 13:58:37.751349] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.905 [2024-12-11 13:58:37.751457] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:44.905 [2024-12-11 13:58:37.751480] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:44.905 [2024-12-11 13:58:37.751492] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:45.187 00:17:45.187 real 0m0.649s 00:17:45.187 user 0m0.401s 00:17:45.187 sys 0m0.143s 00:17:45.187 13:58:38 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:45.187 ************************************ 00:17:45.187 END TEST bdev_json_nonarray 00:17:45.187 ************************************ 00:17:45.187 13:58:38 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:45.187 13:58:38 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:17:45.187 13:58:38 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:17:45.187 13:58:38 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:17:45.187 13:58:38 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:17:45.187 13:58:38 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:17:45.187 13:58:38 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:45.187 13:58:38 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:45.187 13:58:38 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:17:45.187 13:58:38 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:17:45.187 13:58:38 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:17:45.187 13:58:38 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:17:45.187 13:58:38 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:45.757 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:53.872 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:53.872 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:53.872 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:53.872 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:53.872 00:17:53.872 real 1m1.977s 00:17:53.872 user 1m39.610s 00:17:53.872 sys 0m43.843s 00:17:53.872 13:58:45 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.872 13:58:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:53.872 ************************************ 00:17:53.872 END TEST blockdev_xnvme 00:17:53.872 ************************************ 00:17:53.872 13:58:45 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:53.872 13:58:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:53.872 13:58:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.872 13:58:45 -- common/autotest_common.sh@10 -- # set +x 00:17:53.872 ************************************ 00:17:53.872 START TEST ublk 00:17:53.872 ************************************ 00:17:53.872 13:58:45 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:53.872 * Looking for test storage... 
00:17:53.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:53.872 13:58:46 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:53.872 13:58:46 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:17:53.872 13:58:46 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:53.872 13:58:46 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:53.872 13:58:46 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.872 13:58:46 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.872 13:58:46 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.872 13:58:46 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.872 13:58:46 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:17:53.872 13:58:46 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.872 13:58:46 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.872 13:58:46 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:17:53.872 13:58:46 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.872 13:58:46 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.872 13:58:46 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:53.872 13:58:46 ublk -- scripts/common.sh@344 -- # case "$op" in 00:17:53.872 13:58:46 ublk -- scripts/common.sh@345 -- # : 1 00:17:53.872 13:58:46 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.872 13:58:46 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:53.872 13:58:46 ublk -- scripts/common.sh@365 -- # decimal 1 00:17:53.872 13:58:46 ublk -- scripts/common.sh@353 -- # local d=1 00:17:53.872 13:58:46 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.872 13:58:46 ublk -- scripts/common.sh@355 -- # echo 1 00:17:53.872 13:58:46 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.872 13:58:46 ublk -- scripts/common.sh@366 -- # decimal 2 00:17:53.872 13:58:46 ublk -- scripts/common.sh@353 -- # local d=2 00:17:53.872 13:58:46 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.872 13:58:46 ublk -- scripts/common.sh@355 -- # echo 2 00:17:53.872 13:58:46 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.872 13:58:46 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.872 13:58:46 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.872 13:58:46 ublk -- scripts/common.sh@368 -- # return 0 00:17:53.872 13:58:46 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.872 13:58:46 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:53.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.872 --rc genhtml_branch_coverage=1 00:17:53.872 --rc genhtml_function_coverage=1 00:17:53.872 --rc genhtml_legend=1 00:17:53.872 --rc geninfo_all_blocks=1 00:17:53.872 --rc geninfo_unexecuted_blocks=1 00:17:53.872 00:17:53.872 ' 00:17:53.872 13:58:46 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:53.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.872 --rc genhtml_branch_coverage=1 00:17:53.872 --rc genhtml_function_coverage=1 00:17:53.872 --rc genhtml_legend=1 00:17:53.872 --rc geninfo_all_blocks=1 00:17:53.872 --rc geninfo_unexecuted_blocks=1 00:17:53.872 00:17:53.872 ' 00:17:53.872 13:58:46 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:53.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.872 --rc genhtml_branch_coverage=1 00:17:53.872 --rc 
genhtml_function_coverage=1 00:17:53.872 --rc genhtml_legend=1 00:17:53.872 --rc geninfo_all_blocks=1 00:17:53.872 --rc geninfo_unexecuted_blocks=1 00:17:53.872 00:17:53.872 ' 00:17:53.872 13:58:46 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:53.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.872 --rc genhtml_branch_coverage=1 00:17:53.872 --rc genhtml_function_coverage=1 00:17:53.872 --rc genhtml_legend=1 00:17:53.872 --rc geninfo_all_blocks=1 00:17:53.872 --rc geninfo_unexecuted_blocks=1 00:17:53.872 00:17:53.872 ' 00:17:53.872 13:58:46 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:53.872 13:58:46 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:53.872 13:58:46 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:53.872 13:58:46 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:53.872 13:58:46 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:53.872 13:58:46 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:53.872 13:58:46 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:53.872 13:58:46 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:53.872 13:58:46 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:17:53.872 13:58:46 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:17:53.872 13:58:46 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:17:53.872 13:58:46 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:17:53.872 13:58:46 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:17:53.872 13:58:46 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:17:53.872 13:58:46 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:17:53.872 13:58:46 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:17:53.872 13:58:46 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:17:53.872 13:58:46 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:17:53.872 13:58:46 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:17:53.872 13:58:46 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:17:53.872 13:58:46 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:53.872 13:58:46 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.872 13:58:46 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.872 ************************************ 00:17:53.872 START TEST test_save_ublk_config 00:17:53.872 ************************************ 00:17:53.872 13:58:46 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:17:53.872 13:58:46 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:17:53.872 13:58:46 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=76229 00:17:53.872 13:58:46 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:17:53.872 13:58:46 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:17:53.872 13:58:46 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 76229 00:17:53.872 13:58:46 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 76229 ']' 00:17:53.872 13:58:46 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.872 13:58:46 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.872 13:58:46 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:53.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.872 13:58:46 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.872 13:58:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:53.872 [2024-12-11 13:58:46.284054] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:17:53.872 [2024-12-11 13:58:46.284181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76229 ] 00:17:53.872 [2024-12-11 13:58:46.460337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.872 [2024-12-11 13:58:46.576494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.810 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.810 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:54.810 13:58:47 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:17:54.810 13:58:47 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:17:54.810 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.810 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:54.810 [2024-12-11 13:58:47.496847] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:54.810 [2024-12-11 13:58:47.497977] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:54.810 malloc0 00:17:54.810 [2024-12-11 13:58:47.586972] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:54.810 [2024-12-11 13:58:47.587063] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:54.810 [2024-12-11 13:58:47.587076] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:54.810 [2024-12-11 13:58:47.587084] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:54.810 [2024-12-11 13:58:47.591118] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:54.810 [2024-12-11 13:58:47.591140] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:54.811 [2024-12-11 13:58:47.601862] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:54.811 [2024-12-11 13:58:47.601963] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:54.811 [2024-12-11 13:58:47.625862] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:54.811 0 00:17:54.811 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.811 13:58:47 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:17:54.811 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.811 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:55.070 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.070 13:58:47 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:17:55.070 "subsystems": [ 00:17:55.070 { 00:17:55.070 "subsystem": "fsdev", 00:17:55.070 
"config": [ 00:17:55.070 { 00:17:55.070 "method": "fsdev_set_opts", 00:17:55.070 "params": { 00:17:55.070 "fsdev_io_pool_size": 65535, 00:17:55.070 "fsdev_io_cache_size": 256 00:17:55.070 } 00:17:55.070 } 00:17:55.070 ] 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "subsystem": "keyring", 00:17:55.070 "config": [] 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "subsystem": "iobuf", 00:17:55.070 "config": [ 00:17:55.070 { 00:17:55.070 "method": "iobuf_set_options", 00:17:55.070 "params": { 00:17:55.070 "small_pool_count": 8192, 00:17:55.070 "large_pool_count": 1024, 00:17:55.070 "small_bufsize": 8192, 00:17:55.070 "large_bufsize": 135168, 00:17:55.070 "enable_numa": false 00:17:55.070 } 00:17:55.070 } 00:17:55.070 ] 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "subsystem": "sock", 00:17:55.070 "config": [ 00:17:55.070 { 00:17:55.070 "method": "sock_set_default_impl", 00:17:55.070 "params": { 00:17:55.070 "impl_name": "posix" 00:17:55.070 } 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "method": "sock_impl_set_options", 00:17:55.070 "params": { 00:17:55.070 "impl_name": "ssl", 00:17:55.070 "recv_buf_size": 4096, 00:17:55.070 "send_buf_size": 4096, 00:17:55.070 "enable_recv_pipe": true, 00:17:55.070 "enable_quickack": false, 00:17:55.070 "enable_placement_id": 0, 00:17:55.070 "enable_zerocopy_send_server": true, 00:17:55.070 "enable_zerocopy_send_client": false, 00:17:55.070 "zerocopy_threshold": 0, 00:17:55.070 "tls_version": 0, 00:17:55.070 "enable_ktls": false 00:17:55.070 } 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "method": "sock_impl_set_options", 00:17:55.070 "params": { 00:17:55.070 "impl_name": "posix", 00:17:55.070 "recv_buf_size": 2097152, 00:17:55.070 "send_buf_size": 2097152, 00:17:55.070 "enable_recv_pipe": true, 00:17:55.070 "enable_quickack": false, 00:17:55.070 "enable_placement_id": 0, 00:17:55.070 "enable_zerocopy_send_server": true, 00:17:55.070 "enable_zerocopy_send_client": false, 00:17:55.070 "zerocopy_threshold": 0, 00:17:55.070 "tls_version": 0, 00:17:55.070 "enable_ktls": false 00:17:55.070 } 00:17:55.070 } 00:17:55.070 ] 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "subsystem": "vmd", 00:17:55.070 "config": [] 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "subsystem": "accel", 00:17:55.070 "config": [ 00:17:55.070 { 00:17:55.070 "method": "accel_set_options", 00:17:55.070 "params": { 00:17:55.070 "small_cache_size": 128, 00:17:55.070 "large_cache_size": 16, 00:17:55.070 "task_count": 2048, 00:17:55.070 "sequence_count": 2048, 00:17:55.070 "buf_count": 2048 00:17:55.070 } 00:17:55.070 } 00:17:55.070 ] 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "subsystem": "bdev", 00:17:55.070 "config": [ 00:17:55.070 { 00:17:55.070 "method": "bdev_set_options", 00:17:55.070 "params": { 00:17:55.070 "bdev_io_pool_size": 65535, 00:17:55.070 "bdev_io_cache_size": 256, 00:17:55.070 "bdev_auto_examine": true, 00:17:55.070 "iobuf_small_cache_size": 128, 00:17:55.070 "iobuf_large_cache_size": 16 00:17:55.070 } 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "method": "bdev_raid_set_options", 00:17:55.070 "params": { 00:17:55.070 "process_window_size_kb": 1024, 00:17:55.070 "process_max_bandwidth_mb_sec": 0 00:17:55.070 } 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "method": "bdev_iscsi_set_options", 00:17:55.070 "params": { 00:17:55.070 "timeout_sec": 30 00:17:55.070 } 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "method": "bdev_nvme_set_options", 00:17:55.070 "params": { 00:17:55.070 "action_on_timeout": "none", 00:17:55.070 "timeout_us": 0, 00:17:55.070 "timeout_admin_us": 0, 00:17:55.070 
"keep_alive_timeout_ms": 10000, 00:17:55.070 "arbitration_burst": 0, 00:17:55.070 "low_priority_weight": 0, 00:17:55.070 "medium_priority_weight": 0, 00:17:55.070 "high_priority_weight": 0, 00:17:55.070 "nvme_adminq_poll_period_us": 10000, 00:17:55.070 "nvme_ioq_poll_period_us": 0, 00:17:55.070 "io_queue_requests": 0, 00:17:55.070 "delay_cmd_submit": true, 00:17:55.070 "transport_retry_count": 4, 00:17:55.070 "bdev_retry_count": 3, 00:17:55.070 "transport_ack_timeout": 0, 00:17:55.070 "ctrlr_loss_timeout_sec": 0, 00:17:55.070 "reconnect_delay_sec": 0, 00:17:55.070 "fast_io_fail_timeout_sec": 0, 00:17:55.070 "disable_auto_failback": false, 00:17:55.070 "generate_uuids": false, 00:17:55.070 "transport_tos": 0, 00:17:55.070 "nvme_error_stat": false, 00:17:55.070 "rdma_srq_size": 0, 00:17:55.070 "io_path_stat": false, 00:17:55.070 "allow_accel_sequence": false, 00:17:55.070 "rdma_max_cq_size": 0, 00:17:55.070 "rdma_cm_event_timeout_ms": 0, 00:17:55.070 "dhchap_digests": [ 00:17:55.070 "sha256", 00:17:55.070 "sha384", 00:17:55.070 "sha512" 00:17:55.070 ], 00:17:55.070 "dhchap_dhgroups": [ 00:17:55.070 "null", 00:17:55.070 "ffdhe2048", 00:17:55.070 "ffdhe3072", 00:17:55.070 "ffdhe4096", 00:17:55.070 "ffdhe6144", 00:17:55.070 "ffdhe8192" 00:17:55.070 ], 00:17:55.070 "rdma_umr_per_io": false 00:17:55.070 } 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "method": "bdev_nvme_set_hotplug", 00:17:55.070 "params": { 00:17:55.070 "period_us": 100000, 00:17:55.070 "enable": false 00:17:55.070 } 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "method": "bdev_malloc_create", 00:17:55.070 "params": { 00:17:55.070 "name": "malloc0", 00:17:55.070 "num_blocks": 8192, 00:17:55.070 "block_size": 4096, 00:17:55.070 "physical_block_size": 4096, 00:17:55.070 "uuid": "5694a810-5c03-40c5-be21-7363d33c11cd", 00:17:55.070 "optimal_io_boundary": 0, 00:17:55.070 "md_size": 0, 00:17:55.070 "dif_type": 0, 00:17:55.070 "dif_is_head_of_md": false, 00:17:55.070 "dif_pi_format": 0 00:17:55.070 } 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "method": "bdev_wait_for_examine" 00:17:55.070 } 00:17:55.070 ] 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "subsystem": "scsi", 00:17:55.070 "config": null 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "subsystem": "scheduler", 00:17:55.070 "config": [ 00:17:55.070 { 00:17:55.070 "method": "framework_set_scheduler", 00:17:55.070 "params": { 00:17:55.070 "name": "static" 00:17:55.070 } 00:17:55.070 } 00:17:55.070 ] 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "subsystem": "vhost_scsi", 00:17:55.070 "config": [] 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "subsystem": "vhost_blk", 00:17:55.070 "config": [] 00:17:55.070 }, 00:17:55.070 { 00:17:55.070 "subsystem": "ublk", 00:17:55.070 "config": [ 00:17:55.070 { 00:17:55.070 "method": "ublk_create_target", 00:17:55.070 "params": { 00:17:55.070 "cpumask": "1" 00:17:55.070 } 00:17:55.070 }, 00:17:55.071 { 00:17:55.071 "method": "ublk_start_disk", 00:17:55.071 "params": { 00:17:55.071 "bdev_name": "malloc0", 00:17:55.071 "ublk_id": 0, 00:17:55.071 "num_queues": 1, 00:17:55.071 "queue_depth": 128 00:17:55.071 } 00:17:55.071 } 00:17:55.071 ] 00:17:55.071 }, 00:17:55.071 { 00:17:55.071 "subsystem": "nbd", 00:17:55.071 "config": [] 00:17:55.071 }, 00:17:55.071 { 00:17:55.071 "subsystem": "nvmf", 00:17:55.071 "config": [ 00:17:55.071 { 00:17:55.071 "method": "nvmf_set_config", 00:17:55.071 "params": { 00:17:55.071 "discovery_filter": "match_any", 00:17:55.071 "admin_cmd_passthru": { 00:17:55.071 "identify_ctrlr": false 00:17:55.071 }, 00:17:55.071 
"dhchap_digests": [ 00:17:55.071 "sha256", 00:17:55.071 "sha384", 00:17:55.071 "sha512" 00:17:55.071 ], 00:17:55.071 "dhchap_dhgroups": [ 00:17:55.071 "null", 00:17:55.071 "ffdhe2048", 00:17:55.071 "ffdhe3072", 00:17:55.071 "ffdhe4096", 00:17:55.071 "ffdhe6144", 00:17:55.071 "ffdhe8192" 00:17:55.071 ] 00:17:55.071 } 00:17:55.071 }, 00:17:55.071 { 00:17:55.071 "method": "nvmf_set_max_subsystems", 00:17:55.071 "params": { 00:17:55.071 "max_subsystems": 1024 00:17:55.071 } 00:17:55.071 }, 00:17:55.071 { 00:17:55.071 "method": "nvmf_set_crdt", 00:17:55.071 "params": { 00:17:55.071 "crdt1": 0, 00:17:55.071 "crdt2": 0, 00:17:55.071 "crdt3": 0 00:17:55.071 } 00:17:55.071 } 00:17:55.071 ] 00:17:55.071 }, 00:17:55.071 { 00:17:55.071 "subsystem": "iscsi", 00:17:55.071 "config": [ 00:17:55.071 { 00:17:55.071 "method": "iscsi_set_options", 00:17:55.071 "params": { 00:17:55.071 "node_base": "iqn.2016-06.io.spdk", 00:17:55.071 "max_sessions": 128, 00:17:55.071 "max_connections_per_session": 2, 00:17:55.071 "max_queue_depth": 64, 00:17:55.071 "default_time2wait": 2, 00:17:55.071 "default_time2retain": 20, 00:17:55.071 "first_burst_length": 8192, 00:17:55.071 "immediate_data": true, 00:17:55.071 "allow_duplicated_isid": false, 00:17:55.071 "error_recovery_level": 0, 00:17:55.071 "nop_timeout": 60, 00:17:55.071 "nop_in_interval": 30, 00:17:55.071 "disable_chap": false, 00:17:55.071 "require_chap": false, 00:17:55.071 "mutual_chap": false, 00:17:55.071 "chap_group": 0, 00:17:55.071 "max_large_datain_per_connection": 64, 00:17:55.071 "max_r2t_per_connection": 4, 00:17:55.071 "pdu_pool_size": 36864, 00:17:55.071 "immediate_data_pool_size": 16384, 00:17:55.071 "data_out_pool_size": 2048 00:17:55.071 } 00:17:55.071 } 00:17:55.071 ] 00:17:55.071 } 00:17:55.071 ] 00:17:55.071 }' 00:17:55.071 13:58:47 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 76229 00:17:55.071 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 76229 ']' 00:17:55.071 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 76229 00:17:55.071 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:55.071 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.071 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76229 00:17:55.071 killing process with pid 76229 00:17:55.071 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:55.071 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:55.071 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76229' 00:17:55.071 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 76229 00:17:55.071 13:58:47 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 76229 00:17:56.454 [2024-12-11 13:58:49.414424] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:56.454 [2024-12-11 13:58:49.450868] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:56.454 [2024-12-11 13:58:49.451004] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:56.454 [2024-12-11 13:58:49.461869] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:56.454 [2024-12-11 13:58:49.461922] ublk.c: 
985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:56.454 [2024-12-11 13:58:49.461938] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:56.454 [2024-12-11 13:58:49.461965] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:56.454 [2024-12-11 13:58:49.462140] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:58.990 13:58:51 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=76300 00:17:58.990 13:58:51 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 76300 00:17:58.990 13:58:51 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 76300 ']' 00:17:58.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.990 13:58:51 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:17:58.990 13:58:51 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.990 13:58:51 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.990 13:58:51 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.990 13:58:51 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:17:58.990 "subsystems": [ 00:17:58.990 { 00:17:58.990 "subsystem": "fsdev", 00:17:58.990 "config": [ 00:17:58.990 { 00:17:58.990 "method": "fsdev_set_opts", 00:17:58.990 "params": { 00:17:58.990 "fsdev_io_pool_size": 65535, 00:17:58.990 "fsdev_io_cache_size": 256 00:17:58.990 } 00:17:58.990 } 00:17:58.990 ] 00:17:58.990 }, 00:17:58.990 { 00:17:58.990 "subsystem": "keyring", 00:17:58.990 "config": [] 00:17:58.990 }, 00:17:58.990 { 00:17:58.990 "subsystem": "iobuf", 00:17:58.990 "config": [ 00:17:58.990 { 00:17:58.990 "method": "iobuf_set_options", 00:17:58.990 "params": { 00:17:58.990 "small_pool_count": 8192, 00:17:58.990 "large_pool_count": 1024, 00:17:58.990 "small_bufsize": 8192, 00:17:58.990 "large_bufsize": 135168, 00:17:58.990 "enable_numa": false 00:17:58.990 } 00:17:58.990 } 00:17:58.990 ] 00:17:58.990 }, 00:17:58.990 { 00:17:58.990 "subsystem": "sock", 00:17:58.990 "config": [ 00:17:58.990 { 00:17:58.990 "method": "sock_set_default_impl", 00:17:58.990 "params": { 00:17:58.990 "impl_name": "posix" 00:17:58.990 } 00:17:58.990 }, 00:17:58.990 { 00:17:58.990 "method": "sock_impl_set_options", 00:17:58.990 "params": { 00:17:58.990 "impl_name": "ssl", 00:17:58.990 "recv_buf_size": 4096, 00:17:58.990 "send_buf_size": 4096, 00:17:58.990 "enable_recv_pipe": true, 00:17:58.990 "enable_quickack": false, 00:17:58.990 "enable_placement_id": 0, 00:17:58.990 "enable_zerocopy_send_server": true, 00:17:58.990 "enable_zerocopy_send_client": false, 00:17:58.990 "zerocopy_threshold": 0, 00:17:58.990 "tls_version": 0, 00:17:58.990 "enable_ktls": false 00:17:58.990 } 00:17:58.990 }, 00:17:58.990 { 00:17:58.990 "method": "sock_impl_set_options", 00:17:58.990 "params": { 00:17:58.990 "impl_name": "posix", 00:17:58.990 "recv_buf_size": 2097152, 00:17:58.990 "send_buf_size": 2097152, 00:17:58.990 "enable_recv_pipe": true, 00:17:58.990 "enable_quickack": false, 00:17:58.990 "enable_placement_id": 0, 00:17:58.990 "enable_zerocopy_send_server": true, 00:17:58.990 "enable_zerocopy_send_client": false, 00:17:58.990 "zerocopy_threshold": 0, 00:17:58.990 "tls_version": 0, 00:17:58.990 "enable_ktls": false 00:17:58.990 } 00:17:58.990 } 00:17:58.990 ] 00:17:58.990 }, 00:17:58.990 { 
00:17:58.990 "subsystem": "vmd", 00:17:58.990 "config": [] 00:17:58.990 }, 00:17:58.990 { 00:17:58.990 "subsystem": "accel", 00:17:58.990 "config": [ 00:17:58.990 { 00:17:58.990 "method": "accel_set_options", 00:17:58.990 "params": { 00:17:58.990 "small_cache_size": 128, 00:17:58.990 "large_cache_size": 16, 00:17:58.990 "task_count": 2048, 00:17:58.990 "sequence_count": 2048, 00:17:58.990 "buf_count": 2048 00:17:58.990 } 00:17:58.990 } 00:17:58.990 ] 00:17:58.990 }, 00:17:58.990 { 00:17:58.990 "subsystem": "bdev", 00:17:58.990 "config": [ 00:17:58.990 { 00:17:58.990 "method": "bdev_set_options", 00:17:58.990 "params": { 00:17:58.990 "bdev_io_pool_size": 65535, 00:17:58.990 "bdev_io_cache_size": 256, 00:17:58.990 "bdev_auto_examine": true, 00:17:58.990 "iobuf_small_cache_size": 128, 00:17:58.990 "iobuf_large_cache_size": 16 00:17:58.990 } 00:17:58.990 }, 00:17:58.990 { 00:17:58.990 "method": "bdev_raid_set_options", 00:17:58.990 "params": { 00:17:58.990 "process_window_size_kb": 1024, 00:17:58.990 "process_max_bandwidth_mb_sec": 0 00:17:58.990 } 00:17:58.990 }, 00:17:58.990 { 00:17:58.990 "method": "bdev_iscsi_set_options", 00:17:58.990 "params": { 00:17:58.990 "timeout_sec": 30 00:17:58.990 } 00:17:58.990 }, 00:17:58.990 { 00:17:58.990 "method": "bdev_nvme_set_options", 00:17:58.990 "params": { 00:17:58.990 "action_on_timeout": "none", 00:17:58.990 "timeout_us": 0, 00:17:58.990 "timeout_admin_us": 0, 00:17:58.990 "keep_alive_timeout_ms": 10000, 00:17:58.990 "arbitration_burst": 0, 00:17:58.990 "low_priority_weight": 0, 00:17:58.990 "medium_priority_weight": 0, 00:17:58.990 "high_priority_weight": 0, 00:17:58.990 "nvme_adminq_poll_period_us": 10000, 00:17:58.991 "nvme_ioq_poll_period_us": 0, 00:17:58.991 "io_queue_requests": 0, 00:17:58.991 "delay_cmd_submit": true, 00:17:58.991 "transport_retry_count": 4, 00:17:58.991 "bdev_retry_count": 3, 00:17:58.991 "transport_ack_timeout": 0, 00:17:58.991 "ctrlr_loss_timeout_sec": 0, 00:17:58.991 "reconnect_delay_sec": 0, 00:17:58.991 "fast_io_fail_timeout_sec": 0, 00:17:58.991 "disable_auto_failback": false, 00:17:58.991 "generate_uuids": false, 00:17:58.991 "transport_tos": 0, 00:17:58.991 "nvme_error_stat": false, 00:17:58.991 "rdma_srq_size": 0, 00:17:58.991 "io_path_stat": false, 00:17:58.991 "allow_accel_sequence": false, 00:17:58.991 "rdma_max_cq_size": 0, 00:17:58.991 "rdma_cm_event_timeout_ms": 0, 00:17:58.991 "dhchap_digests": [ 00:17:58.991 "sha256", 00:17:58.991 "sha384", 00:17:58.991 "sha512" 00:17:58.991 ], 00:17:58.991 "dhchap_dhgroups": [ 00:17:58.991 "null", 00:17:58.991 "ffdhe2048", 00:17:58.991 "ffdhe3072", 00:17:58.991 "ffdhe4096", 00:17:58.991 "ffdhe6144", 00:17:58.991 "ffdhe8192" 00:17:58.991 ], 00:17:58.991 "rdma_umr_per_io": false 00:17:58.991 } 00:17:58.991 }, 00:17:58.991 { 00:17:58.991 "method": "bdev_nvme_set_hotplug", 00:17:58.991 "params": { 00:17:58.991 "period_us": 100000, 00:17:58.991 "enable": false 00:17:58.991 } 00:17:58.991 }, 00:17:58.991 { 00:17:58.991 "method": "bdev_malloc_create", 00:17:58.991 "params": { 00:17:58.991 "name": "malloc0", 00:17:58.991 "num_blocks": 8192, 00:17:58.991 "block_size": 4096, 00:17:58.991 "physical_block_size": 4096, 00:17:58.991 "uuid": "5694a810-5c03-40c5-be21-7363d33c11cd", 00:17:58.991 "optimal_io_boundary": 0, 00:17:58.991 "md_size": 0, 00:17:58.991 "dif_type": 0, 00:17:58.991 "dif_is_head_of_md": false, 00:17:58.991 "dif_pi_format": 0 00:17:58.991 } 00:17:58.991 }, 00:17:58.991 { 00:17:58.991 "method": "bdev_wait_for_examine" 00:17:58.991 } 00:17:58.991 ] 00:17:58.991 }, 
00:17:58.991 { 00:17:58.991 "subsystem": "scsi", 00:17:58.991 "config": null 00:17:58.991 }, 00:17:58.991 { 00:17:58.991 "subsystem": "scheduler", 00:17:58.991 "config": [ 00:17:58.991 { 00:17:58.991 "method": "framework_set_scheduler", 00:17:58.991 "params": { 00:17:58.991 "name": "static" 00:17:58.991 } 00:17:58.991 } 00:17:58.991 ] 00:17:58.991 }, 00:17:58.991 { 00:17:58.991 "subsystem": "vhost_scsi", 00:17:58.991 "config": [] 00:17:58.991 }, 00:17:58.991 { 00:17:58.991 "subsystem": "vhost_blk", 00:17:58.991 "config": [] 00:17:58.991 }, 00:17:58.991 { 00:17:58.991 "subsystem": "ublk", 00:17:58.991 "config": [ 00:17:58.991 { 00:17:58.991 "method": "ublk_create_target", 00:17:58.991 "params": { 00:17:58.991 "cpumask": "1" 00:17:58.991 } 00:17:58.991 }, 00:17:58.991 { 00:17:58.991 "method": "ublk_start_disk", 00:17:58.991 "params": { 00:17:58.991 "bdev_name": "malloc0", 00:17:58.991 "ublk_id": 0, 00:17:58.991 "num_queues": 1, 00:17:58.991 "queue_depth": 128 00:17:58.991 } 00:17:58.991 } 00:17:58.991 ] 00:17:58.991 }, 00:17:58.991 { 00:17:58.991 "subsystem": "nbd", 00:17:58.991 "config": [] 00:17:58.991 }, 00:17:58.991 { 00:17:58.991 "subsystem": "nvmf", 00:17:58.991 "config": [ 00:17:58.991 { 00:17:58.991 "method": "nvmf_set_config", 00:17:58.991 "params": { 00:17:58.991 "discovery_filter": "match_any", 00:17:58.991 "admin_cmd_passthru": { 00:17:58.991 "identify_ctrlr": false 00:17:58.991 }, 00:17:58.991 "dhchap_digests": [ 00:17:58.991 "sha256", 00:17:58.991 "sha384", 00:17:58.991 "sha512" 00:17:58.991 ], 00:17:58.991 "dhchap_dhgroups": [ 00:17:58.991 "null", 00:17:58.991 "ffdhe2048", 00:17:58.991 "ffdhe3072", 00:17:58.991 "ffdhe4096", 00:17:58.991 "ffdhe6144", 00:17:58.991 "ffdhe8192" 00:17:58.991 ] 00:17:58.991 } 00:17:58.991 }, 00:17:58.991 { 00:17:58.991 "method": "nvmf_set_max_subsystems", 00:17:58.991 "params": { 00:17:58.991 "max_subsystems": 1024 00:17:58.991 } 00:17:58.991 }, 00:17:58.991 { 00:17:58.991 "method": "nvmf_set_crdt", 00:17:58.991 "params": { 00:17:58.991 "crdt1": 0, 00:17:58.991 "crdt2": 0, 00:17:58.991 "crdt3": 0 00:17:58.991 } 00:17:58.991 } 00:17:58.991 ] 00:17:58.991 }, 00:17:58.991 { 00:17:58.991 "subsystem": "iscsi", 00:17:58.991 "config": [ 00:17:58.991 { 00:17:58.991 "method": "iscsi_set_options", 00:17:58.991 "params": { 00:17:58.991 "node_base": "iqn.2016-06.io.spdk", 00:17:58.991 "max_sessions": 128, 00:17:58.991 "max_connections_per_session": 2, 00:17:58.991 "max_queue_depth": 64, 00:17:58.991 "default_time2wait": 2, 00:17:58.991 "default_time2retain": 20, 00:17:58.991 "first_burst_length": 8192, 00:17:58.991 "immediate_data": true, 00:17:58.991 "allow_duplicated_isid": false, 00:17:58.991 "error_recovery_level": 0, 00:17:58.991 "nop_timeout": 60, 00:17:58.991 "nop_in_interval": 30, 00:17:58.991 "disable_chap": false, 00:17:58.991 "require_chap": false, 00:17:58.991 "mutual_chap": false, 00:17:58.991 "chap_group": 0, 00:17:58.992 "max_large_datain_per_connection": 64, 00:17:58.992 "max_r2t_per_connection": 4, 00:17:58.992 "pdu_pool_size": 36864, 00:17:58.992 "immediate_data_pool_size": 16384, 00:17:58.992 "data_out_pool_size": 2048 00:17:58.992 } 00:17:58.992 } 00:17:58.992 ] 00:17:58.992 } 00:17:58.992 ] 00:17:58.992 }' 00:17:58.992 13:58:51 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.992 13:58:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:58.992 [2024-12-11 13:58:51.736141] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:17:58.992 [2024-12-11 13:58:51.736271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76300 ] 00:17:58.992 [2024-12-11 13:58:51.918703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.992 [2024-12-11 13:58:52.027608] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.370 [2024-12-11 13:58:53.028851] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:00.370 [2024-12-11 13:58:53.029970] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:00.370 [2024-12-11 13:58:53.036971] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:00.370 [2024-12-11 13:58:53.037073] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:00.370 [2024-12-11 13:58:53.037086] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:00.370 [2024-12-11 13:58:53.037095] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:00.370 [2024-12-11 13:58:53.045934] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:00.370 [2024-12-11 13:58:53.045958] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:00.370 [2024-12-11 13:58:53.052852] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:00.370 [2024-12-11 13:58:53.052942] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:00.370 [2024-12-11 13:58:53.069858] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 76300 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 76300 ']' 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 76300 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76300 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.370 killing process with pid 76300 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76300' 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 76300 00:18:00.370 13:58:53 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 76300 00:18:01.747 [2024-12-11 13:58:54.725531] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:01.747 [2024-12-11 13:58:54.766925] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:01.747 [2024-12-11 13:58:54.767042] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:01.747 [2024-12-11 13:58:54.774856] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:01.747 [2024-12-11 13:58:54.774909] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:01.747 [2024-12-11 13:58:54.774919] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:01.747 [2024-12-11 13:58:54.774943] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:01.747 [2024-12-11 13:58:54.775082] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:03.653 13:58:56 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:18:03.653 00:18:03.653 real 0m10.487s 00:18:03.653 user 0m7.825s 00:18:03.653 sys 0m3.469s 00:18:03.653 13:58:56 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.653 13:58:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:03.653 ************************************ 00:18:03.653 END TEST test_save_ublk_config 00:18:03.653 ************************************ 00:18:03.912 13:58:56 ublk -- ublk/ublk.sh@139 -- # spdk_pid=76386 00:18:03.912 13:58:56 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:03.912 13:58:56 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:03.912 13:58:56 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76386 00:18:03.912 13:58:56 ublk -- common/autotest_common.sh@835 -- # '[' -z 76386 ']' 00:18:03.912 13:58:56 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.912 13:58:56 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.912 13:58:56 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.912 13:58:56 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.912 13:58:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:03.912 [2024-12-11 13:58:56.828760] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
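Annotation: waitforlisten (with the max_retries=100 local seen in these traces) gates every test on the target's RPC socket coming up before any rpc_cmd runs. A rough sketch of that polling pattern, assuming rpc.py and the default /var/tmp/spdk.sock address shown in this log; the in-tree helper differs in detail:

    # Poll until the RPC socket answers a harmless query, up to 100 tries.
    for (( i = 0; i < 100; i++ )); do
        if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done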
00:18:03.912 [2024-12-11 13:58:56.828900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76386 ] 00:18:04.171 [2024-12-11 13:58:57.011551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:04.171 [2024-12-11 13:58:57.133702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.171 [2024-12-11 13:58:57.133706] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.105 13:58:58 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.105 13:58:58 ublk -- common/autotest_common.sh@868 -- # return 0 00:18:05.105 13:58:58 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:18:05.105 13:58:58 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:05.105 13:58:58 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.105 13:58:58 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:05.105 ************************************ 00:18:05.105 START TEST test_create_ublk 00:18:05.105 ************************************ 00:18:05.105 13:58:58 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:18:05.105 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:18:05.105 13:58:58 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.105 13:58:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:05.105 [2024-12-11 13:58:58.055847] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:05.105 [2024-12-11 13:58:58.058685] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:05.105 13:58:58 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.105 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:18:05.105 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:18:05.105 13:58:58 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.105 13:58:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:05.364 13:58:58 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.364 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:18:05.364 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:05.364 13:58:58 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.364 13:58:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:05.364 [2024-12-11 13:58:58.371007] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:05.364 [2024-12-11 13:58:58.371456] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:05.364 [2024-12-11 13:58:58.371476] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:05.364 [2024-12-11 13:58:58.371485] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:05.364 [2024-12-11 13:58:58.380142] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:05.364 [2024-12-11 13:58:58.380168] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:05.364 
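Annotation: the ADD_DEV/SET_PARAMS/START_DEV control commands traced here are all driven by the RPCs visible in the xtrace; the equivalent bring-up by hand, with the same sizes and flags the test passes:

    scripts/rpc.py ublk_create_target                      # creates the ublk target
    scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096  # 128 MiB bdev, 4 KiB blocks
    scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # ublk id 0 -> /dev/ublkb0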
[2024-12-11 13:58:58.386857] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:05.364 [2024-12-11 13:58:58.387438] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:05.364 [2024-12-11 13:58:58.402871] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:05.624 13:58:58 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.624 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:18:05.624 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:18:05.624 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:18:05.624 13:58:58 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.624 13:58:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:05.624 13:58:58 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.624 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:18:05.624 { 00:18:05.624 "ublk_device": "/dev/ublkb0", 00:18:05.624 "id": 0, 00:18:05.624 "queue_depth": 512, 00:18:05.624 "num_queues": 4, 00:18:05.624 "bdev_name": "Malloc0" 00:18:05.624 } 00:18:05.624 ]' 00:18:05.624 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:18:05.624 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:05.624 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:18:05.624 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:18:05.624 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:18:05.624 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:18:05.624 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:18:05.624 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:18:05.624 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:18:05.624 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:05.624 13:58:58 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:18:05.624 13:58:58 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:18:05.624 13:58:58 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:18:05.624 13:58:58 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:18:05.624 13:58:58 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:18:05.624 13:58:58 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:18:05.624 13:58:58 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:18:05.624 13:58:58 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:18:05.624 13:58:58 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:18:05.624 13:58:58 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:05.624 13:58:58 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
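Annotation: for readability, the single-line fio command assembled by the template above, wrapped; every flag is verbatim from the trace. 0xcc is the write-and-verify pattern, and --time_based --runtime=10 is why fio warns below that the verification read phase never starts (the write phase consumes the whole runtime):

    fio --name=fio_test --filename=/dev/ublkb0 \
        --offset=0 --size=134217728 \
        --rw=write --direct=1 \
        --time_based --runtime=10 \
        --do_verify=1 --verify=pattern \
        --verify_pattern=0xcc --verify_state_save=0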
00:18:05.624 13:58:58 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:18:05.883 fio: verification read phase will never start because write phase uses all of runtime 00:18:05.883 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:18:05.883 fio-3.35 00:18:05.883 Starting 1 process 00:18:15.860 00:18:15.860 fio_test: (groupid=0, jobs=1): err= 0: pid=76438: Wed Dec 11 13:59:08 2024 00:18:15.860 write: IOPS=16.6k, BW=65.0MiB/s (68.2MB/s)(650MiB/10001msec); 0 zone resets 00:18:15.860 clat (usec): min=37, max=4030, avg=59.28, stdev=99.86 00:18:15.860 lat (usec): min=38, max=4030, avg=59.73, stdev=99.86 00:18:15.860 clat percentiles (usec): 00:18:15.860 | 1.00th=[ 40], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 53], 00:18:15.860 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 56], 00:18:15.860 | 70.00th=[ 57], 80.00th=[ 58], 90.00th=[ 61], 95.00th=[ 63], 00:18:15.860 | 99.00th=[ 70], 99.50th=[ 74], 99.90th=[ 2114], 99.95th=[ 2835], 00:18:15.860 | 99.99th=[ 3589] 00:18:15.860 bw ( KiB/s): min=65576, max=75672, per=100.00%, avg=66702.74, stdev=2217.80, samples=19 00:18:15.860 iops : min=16394, max=18918, avg=16675.68, stdev=554.45, samples=19 00:18:15.860 lat (usec) : 50=3.66%, 100=96.12%, 250=0.02%, 500=0.01%, 750=0.01% 00:18:15.860 lat (usec) : 1000=0.01% 00:18:15.860 lat (msec) : 2=0.06%, 4=0.11%, 10=0.01% 00:18:15.860 cpu : usr=3.05%, sys=11.16%, ctx=166434, majf=0, minf=795 00:18:15.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:15.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.860 issued rwts: total=0,166434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:15.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:15.860 00:18:15.860 Run status group 0 (all jobs): 00:18:15.860 WRITE: bw=65.0MiB/s (68.2MB/s), 65.0MiB/s-65.0MiB/s (68.2MB/s-68.2MB/s), io=650MiB (682MB), run=10001-10001msec 00:18:15.860 00:18:15.860 Disk stats (read/write): 00:18:15.860 ublkb0: ios=0/164716, merge=0/0, ticks=0/8571, in_queue=8571, util=99.15% 00:18:15.860 13:59:08 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:18:15.860 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.860 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:15.860 [2024-12-11 13:59:08.904992] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:16.120 [2024-12-11 13:59:08.940287] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:16.120 [2024-12-11 13:59:08.941151] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:16.120 [2024-12-11 13:59:08.949863] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:16.120 [2024-12-11 13:59:08.950133] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:16.120 [2024-12-11 13:59:08.950148] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:16.120 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.120 13:59:08 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:18:16.120 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:18:16.120 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:18:16.120 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:16.120 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.120 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:16.120 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.120 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:18:16.120 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.120 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:16.120 [2024-12-11 13:59:08.973930] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:18:16.120 request: 00:18:16.120 { 00:18:16.120 "ublk_id": 0, 00:18:16.120 "method": "ublk_stop_disk", 00:18:16.120 "req_id": 1 00:18:16.120 } 00:18:16.120 Got JSON-RPC error response 00:18:16.120 response: 00:18:16.120 { 00:18:16.120 "code": -19, 00:18:16.120 "message": "No such device" 00:18:16.120 } 00:18:16.120 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:16.120 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:18:16.120 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:16.120 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:16.120 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:16.120 13:59:08 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:18:16.120 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.120 13:59:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:16.120 [2024-12-11 13:59:08.996944] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:16.120 [2024-12-11 13:59:09.004843] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:16.120 [2024-12-11 13:59:09.004884] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:16.120 13:59:09 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.120 13:59:09 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:16.120 13:59:09 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.120 13:59:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:17.055 13:59:09 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.055 13:59:09 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:18:17.055 13:59:09 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:17.055 13:59:09 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.055 13:59:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:17.055 13:59:09 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.055 13:59:09 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:17.055 13:59:09 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:18:17.055 13:59:09 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:17.055 13:59:09 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:17.055 13:59:09 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.055 13:59:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:17.055 13:59:09 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.055 13:59:09 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:17.055 13:59:09 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:18:17.055 ************************************ 00:18:17.055 END TEST test_create_ublk 00:18:17.055 ************************************ 00:18:17.055 13:59:09 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:17.055 00:18:17.055 real 0m11.808s 00:18:17.055 user 0m0.709s 00:18:17.055 sys 0m1.237s 00:18:17.055 13:59:09 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.055 13:59:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:17.055 13:59:09 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:18:17.055 13:59:09 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:17.055 13:59:09 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:17.055 13:59:09 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:17.055 ************************************ 00:18:17.055 START TEST test_create_multi_ublk 00:18:17.055 ************************************ 00:18:17.055 13:59:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:18:17.055 13:59:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:18:17.055 13:59:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.055 13:59:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:17.055 [2024-12-11 13:59:09.936841] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:17.055 [2024-12-11 13:59:09.939364] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:17.055 13:59:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.055 13:59:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:18:17.056 13:59:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:18:17.056 13:59:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:17.056 13:59:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:18:17.056 13:59:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.056 13:59:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:17.314 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.314 13:59:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:18:17.314 13:59:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:17.314 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.314 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:17.314 [2024-12-11 13:59:10.219004] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
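Annotation: test_create_multi_ublk repeats the single-disk bring-up once per id over seq 0 $MAX_DEV_ID (MAX_DEV_ID=3 per the suite variables above). Condensed into a plain loop with the same parameters:

    for i in $(seq 0 3); do
        scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
        scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # -> /dev/ublkb$i
    done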
00:18:17.314 [2024-12-11 13:59:10.219437] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:17.314 [2024-12-11 13:59:10.219454] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:17.314 [2024-12-11 13:59:10.219467] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:17.314 [2024-12-11 13:59:10.228109] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:17.314 [2024-12-11 13:59:10.228139] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:17.314 [2024-12-11 13:59:10.234864] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:17.314 [2024-12-11 13:59:10.235435] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:17.314 [2024-12-11 13:59:10.244931] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:17.314 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.314 13:59:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:18:17.314 13:59:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:17.314 13:59:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:18:17.314 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.314 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:17.572 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.572 13:59:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:18:17.572 13:59:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:18:17.572 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.572 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:17.572 [2024-12-11 13:59:10.546004] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:18:17.572 [2024-12-11 13:59:10.546488] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:18:17.572 [2024-12-11 13:59:10.546509] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:17.572 [2024-12-11 13:59:10.546517] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:17.572 [2024-12-11 13:59:10.553940] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:17.572 [2024-12-11 13:59:10.553964] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:17.573 [2024-12-11 13:59:10.561856] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:17.573 [2024-12-11 13:59:10.562458] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:17.573 [2024-12-11 13:59:10.578862] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:17.573 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.573 13:59:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:18:17.573 13:59:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:17.573 13:59:10 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:18:17.573 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.573 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:17.831 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.831 13:59:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:18:17.831 13:59:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:18:17.831 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.831 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:17.831 [2024-12-11 13:59:10.863981] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:18:17.831 [2024-12-11 13:59:10.864428] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:18:17.831 [2024-12-11 13:59:10.864445] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:18:17.831 [2024-12-11 13:59:10.864457] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:18:17.831 [2024-12-11 13:59:10.873143] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:17.831 [2024-12-11 13:59:10.873174] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:18.089 [2024-12-11 13:59:10.879860] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:18.089 [2024-12-11 13:59:10.880427] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:18:18.089 [2024-12-11 13:59:10.883273] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:18:18.089 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.089 13:59:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:18:18.089 13:59:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:18.089 13:59:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:18:18.089 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.089 13:59:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:18.348 [2024-12-11 13:59:11.183006] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:18:18.348 [2024-12-11 13:59:11.183440] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:18:18.348 [2024-12-11 13:59:11.183460] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:18:18.348 [2024-12-11 13:59:11.183469] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:18:18.348 [2024-12-11 
13:59:11.190925] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:18.348 [2024-12-11 13:59:11.190949] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:18.348 [2024-12-11 13:59:11.198868] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:18.348 [2024-12-11 13:59:11.199496] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:18:18.348 [2024-12-11 13:59:11.214881] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:18:18.348 { 00:18:18.348 "ublk_device": "/dev/ublkb0", 00:18:18.348 "id": 0, 00:18:18.348 "queue_depth": 512, 00:18:18.348 "num_queues": 4, 00:18:18.348 "bdev_name": "Malloc0" 00:18:18.348 }, 00:18:18.348 { 00:18:18.348 "ublk_device": "/dev/ublkb1", 00:18:18.348 "id": 1, 00:18:18.348 "queue_depth": 512, 00:18:18.348 "num_queues": 4, 00:18:18.348 "bdev_name": "Malloc1" 00:18:18.348 }, 00:18:18.348 { 00:18:18.348 "ublk_device": "/dev/ublkb2", 00:18:18.348 "id": 2, 00:18:18.348 "queue_depth": 512, 00:18:18.348 "num_queues": 4, 00:18:18.348 "bdev_name": "Malloc2" 00:18:18.348 }, 00:18:18.348 { 00:18:18.348 "ublk_device": "/dev/ublkb3", 00:18:18.348 "id": 3, 00:18:18.348 "queue_depth": 512, 00:18:18.348 "num_queues": 4, 00:18:18.348 "bdev_name": "Malloc3" 00:18:18.348 } 00:18:18.348 ]' 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:18.348 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:18:18.607 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:18.607 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:18:18.607 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:18.607 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:18.607 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:18:18.607 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
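Annotation: each block of jq checks in this stretch validates one entry of the ublk_get_disks output field by field (ublk_device, id, queue_depth, num_queues, bdev_name). The same validation condensed into a loop (sketch):

    disks=$(scripts/rpc.py ublk_get_disks)
    for i in 0 1 2 3; do
        [[ $(jq -r ".[$i].ublk_device" <<< "$disks") == "/dev/ublkb$i" ]] || exit 1
        [[ $(jq -r ".[$i].id"          <<< "$disks") == "$i" ]]          || exit 1
        [[ $(jq -r ".[$i].queue_depth" <<< "$disks") == 512 ]]           || exit 1
        [[ $(jq -r ".[$i].num_queues"  <<< "$disks") == 4 ]]             || exit 1
        [[ $(jq -r ".[$i].bdev_name"   <<< "$disks") == "Malloc$i" ]]    || exit 1
    done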
00:18:18.607 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:18:18.607 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:18:18.607 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:18:18.607 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:18.607 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:18:18.607 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:18.607 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:18:18.607 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:18:18.607 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:18.607 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:18:18.866 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:18:18.866 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:18:18.866 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:18:18.866 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:18:18.866 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:18.866 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:18:18.866 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:18.866 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:18:18.866 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:18:18.866 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:18.866 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:18:18.866 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:18:18.866 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:18:19.125 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:18:19.125 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:18:19.125 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:19.125 13:59:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:18:19.125 13:59:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:19.125 13:59:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:18:19.125 13:59:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:18:19.125 13:59:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:18:19.125 13:59:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:18:19.125 13:59:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:19.125 13:59:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:18:19.125 13:59:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.125 13:59:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:19.125 [2024-12-11 13:59:12.075008] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:19.125 [2024-12-11 13:59:12.108212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:19.125 [2024-12-11 13:59:12.109326] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:19.125 [2024-12-11 13:59:12.117886] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:19.125 [2024-12-11 13:59:12.118172] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:19.125 [2024-12-11 13:59:12.118187] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:19.125 13:59:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.125 13:59:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:19.125 13:59:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:18:19.125 13:59:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.125 13:59:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:19.125 [2024-12-11 13:59:12.133970] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:19.125 [2024-12-11 13:59:12.168899] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:19.125 [2024-12-11 13:59:12.169753] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:19.385 [2024-12-11 13:59:12.177900] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:19.385 [2024-12-11 13:59:12.178225] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:19.385 [2024-12-11 13:59:12.178244] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:19.385 13:59:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.385 13:59:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:19.385 13:59:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:18:19.385 13:59:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.385 13:59:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:19.385 [2024-12-11 13:59:12.192977] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:18:19.385 [2024-12-11 13:59:12.225295] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:19.385 [2024-12-11 13:59:12.226296] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:18:19.385 [2024-12-11 13:59:12.235876] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:19.385 [2024-12-11 13:59:12.236153] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:18:19.385 [2024-12-11 13:59:12.236167] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:18:19.385 13:59:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.385 13:59:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:19.385 13:59:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:18:19.385 13:59:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.385 13:59:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
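Each ublk_stop_disk drives the same two-step control handshake visible above for devices 0 through 2 (device 3 follows below): UBLK_CMD_STOP_DEV completes, UBLK_CMD_DEL_DEV removes the device from the tailq, and the "ublk dev N stopped" notice fires. The teardown then finishes with ublk_destroy_target and deletion of the malloc bdevs; as plain RPC calls, assuming the same rpc.py client as above:

    # Sketch only: the teardown mirrored from the trace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in $(seq 0 3); do
        "$rpc" ublk_stop_disk "$i"        # STOP_DEV then DEL_DEV per device
    done
    "$rpc" -t 120 ublk_destroy_target     # longer RPC timeout for the full shutdown
    for i in $(seq 0 3); do
        "$rpc" bdev_malloc_delete "Malloc$i"
    done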
00:18:19.385 [2024-12-11 13:59:12.251971] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:18:19.385 [2024-12-11 13:59:12.293901] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:19.385 [2024-12-11 13:59:12.294657] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:18:19.385 [2024-12-11 13:59:12.299874] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:19.385 [2024-12-11 13:59:12.300172] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:18:19.385 [2024-12-11 13:59:12.300188] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:18:19.385 13:59:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.385 13:59:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:18:19.644 [2024-12-11 13:59:12.497952] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:19.644 [2024-12-11 13:59:12.505846] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:19.644 [2024-12-11 13:59:12.505898] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:19.644 13:59:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:18:19.644 13:59:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:19.644 13:59:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:19.644 13:59:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.644 13:59:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:20.213 13:59:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.213 13:59:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:20.213 13:59:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:20.213 13:59:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.213 13:59:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:20.781 13:59:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.781 13:59:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:20.781 13:59:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:20.781 13:59:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.781 13:59:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:21.040 13:59:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.040 13:59:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:21.041 13:59:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:18:21.041 13:59:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.041 13:59:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:18:21.609 ************************************ 00:18:21.609 END TEST test_create_multi_ublk 00:18:21.609 ************************************ 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:21.609 00:18:21.609 real 0m4.563s 00:18:21.609 user 0m0.996s 00:18:21.609 sys 0m0.229s 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:21.609 13:59:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:21.609 13:59:14 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:21.609 13:59:14 ublk -- ublk/ublk.sh@147 -- # cleanup 00:18:21.609 13:59:14 ublk -- ublk/ublk.sh@130 -- # killprocess 76386 00:18:21.609 13:59:14 ublk -- common/autotest_common.sh@954 -- # '[' -z 76386 ']' 00:18:21.609 13:59:14 ublk -- common/autotest_common.sh@958 -- # kill -0 76386 00:18:21.609 13:59:14 ublk -- common/autotest_common.sh@959 -- # uname 00:18:21.609 13:59:14 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.609 13:59:14 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76386 00:18:21.609 killing process with pid 76386 00:18:21.609 13:59:14 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.609 13:59:14 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.609 13:59:14 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76386' 00:18:21.609 13:59:14 ublk -- common/autotest_common.sh@973 -- # kill 76386 00:18:21.609 13:59:14 ublk -- common/autotest_common.sh@978 -- # wait 76386 00:18:22.984 [2024-12-11 13:59:15.754959] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:22.984 [2024-12-11 13:59:15.755220] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:24.362 00:18:24.362 real 0m31.111s 00:18:24.362 user 0m44.319s 00:18:24.362 sys 0m10.762s 00:18:24.362 13:59:17 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.362 13:59:17 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.362 ************************************ 00:18:24.362 END TEST ublk 00:18:24.362 ************************************ 00:18:24.362 13:59:17 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:24.362 13:59:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:18:24.362 13:59:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.362 13:59:17 -- common/autotest_common.sh@10 -- # set +x 00:18:24.362 ************************************ 00:18:24.362 START TEST ublk_recovery 00:18:24.362 ************************************ 00:18:24.362 13:59:17 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:24.362 * Looking for test storage... 00:18:24.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:24.362 13:59:17 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:24.362 13:59:17 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:18:24.362 13:59:17 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:24.362 13:59:17 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:24.362 13:59:17 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:18:24.362 13:59:17 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.362 13:59:17 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:24.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.362 --rc genhtml_branch_coverage=1 00:18:24.362 --rc genhtml_function_coverage=1 00:18:24.362 --rc genhtml_legend=1 00:18:24.362 --rc geninfo_all_blocks=1 00:18:24.362 --rc geninfo_unexecuted_blocks=1 00:18:24.362 00:18:24.362 ' 00:18:24.362 13:59:17 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:24.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.362 --rc genhtml_branch_coverage=1 00:18:24.362 --rc genhtml_function_coverage=1 00:18:24.362 --rc genhtml_legend=1 00:18:24.362 --rc geninfo_all_blocks=1 00:18:24.362 --rc geninfo_unexecuted_blocks=1 00:18:24.362 00:18:24.362 ' 00:18:24.362 13:59:17 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:24.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.362 --rc genhtml_branch_coverage=1 00:18:24.362 --rc genhtml_function_coverage=1 00:18:24.362 --rc genhtml_legend=1 00:18:24.362 --rc geninfo_all_blocks=1 00:18:24.362 --rc geninfo_unexecuted_blocks=1 00:18:24.362 00:18:24.362 ' 00:18:24.362 13:59:17 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:24.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.362 --rc genhtml_branch_coverage=1 00:18:24.362 --rc genhtml_function_coverage=1 00:18:24.362 --rc genhtml_legend=1 00:18:24.362 --rc geninfo_all_blocks=1 00:18:24.362 --rc geninfo_unexecuted_blocks=1 00:18:24.362 00:18:24.362 ' 00:18:24.362 13:59:17 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:24.362 13:59:17 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:24.362 13:59:17 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:24.362 13:59:17 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:24.362 13:59:17 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:24.362 13:59:17 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:24.362 13:59:17 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:24.362 13:59:17 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:24.362 13:59:17 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:18:24.362 13:59:17 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:18:24.362 13:59:17 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76814 00:18:24.362 13:59:17 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:24.362 13:59:17 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:24.362 13:59:17 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76814 00:18:24.362 13:59:17 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76814 ']' 00:18:24.362 13:59:17 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.362 13:59:17 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.362 13:59:17 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.362 13:59:17 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.362 13:59:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:24.621 [2024-12-11 13:59:17.443756] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:18:24.621 [2024-12-11 13:59:17.444392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76814 ] 00:18:24.621 [2024-12-11 13:59:17.627954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:24.878 [2024-12-11 13:59:17.747083] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.878 [2024-12-11 13:59:17.747113] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.809 13:59:18 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.809 13:59:18 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:25.809 13:59:18 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:18:25.809 13:59:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.809 13:59:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.809 [2024-12-11 13:59:18.598846] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:25.809 [2024-12-11 13:59:18.601311] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:25.809 13:59:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.809 13:59:18 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:25.809 13:59:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.809 13:59:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.809 malloc0 00:18:25.809 13:59:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.809 13:59:18 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:18:25.809 13:59:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.809 13:59:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.809 [2024-12-11 13:59:18.751016] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:18:25.809 [2024-12-11 13:59:18.751134] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:18:25.809 [2024-12-11 13:59:18.751149] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:25.809 [2024-12-11 13:59:18.751158] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:25.809 [2024-12-11 13:59:18.759964] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:25.809 [2024-12-11 13:59:18.759990] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:25.809 [2024-12-11 13:59:18.766861] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:25.809 [2024-12-11 13:59:18.767006] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:25.809 [2024-12-11 13:59:18.781856] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:25.809 1 00:18:25.809 13:59:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.809 13:59:18 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:18:27.183 13:59:19 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76858 00:18:27.183 13:59:19 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:18:27.183 13:59:19 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:18:27.183 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:27.183 fio-3.35 00:18:27.183 Starting 1 process 00:18:32.473 13:59:24 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76814 00:18:32.473 13:59:24 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:37.750 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76814 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:37.750 13:59:29 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76963 00:18:37.750 13:59:29 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:37.750 13:59:29 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.750 13:59:29 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76963 00:18:37.750 13:59:29 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76963 ']' 00:18:37.750 13:59:29 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.750 13:59:29 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.750 13:59:29 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.750 13:59:29 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.750 13:59:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.750 [2024-12-11 13:59:29.918175] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
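This is the core of the recovery scenario: with a sixty-second fio randrw workload in flight against /dev/ublkb1, the target process is killed with SIGKILL and a fresh spdk_tgt comes up under a new pid; the trace below then reattaches the kernel queues with ublk_recover_disk instead of restarting the workload. Sketched as the driving commands, with the pid bookkeeping assumed:

    # Sketch only: the kill/restart/recover sequence from ublk_recovery.sh.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" ublk_create_target
    "$rpc" bdev_malloc_create -b malloc0 64 4096
    "$rpc" ublk_start_disk malloc0 1 -q 2 -d 128
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    sleep 5
    kill -9 "$spdk_pid"                          # hard-kill the target mid-I/O
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &    # new target, new pid
    spdk_pid=$!
    "$rpc" ublk_create_target
    "$rpc" bdev_malloc_create -b malloc0 64 4096
    "$rpc" ublk_recover_disk malloc0 1           # GET_DEV_INFO, then START/END_USER_RECOVERY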
00:18:37.750 [2024-12-11 13:59:29.918495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76963 ] 00:18:37.750 [2024-12-11 13:59:30.102622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:37.750 [2024-12-11 13:59:30.224380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.750 [2024-12-11 13:59:30.224413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.319 13:59:31 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.319 13:59:31 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:38.319 13:59:31 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:38.319 13:59:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.319 13:59:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.319 [2024-12-11 13:59:31.069848] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:38.319 [2024-12-11 13:59:31.072567] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:38.319 13:59:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.319 13:59:31 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:38.319 13:59:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.319 13:59:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.319 malloc0 00:18:38.319 13:59:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.319 13:59:31 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:38.319 13:59:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.319 13:59:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.319 [2024-12-11 13:59:31.229003] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:38.319 [2024-12-11 13:59:31.229053] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:38.319 [2024-12-11 13:59:31.229065] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:38.319 [2024-12-11 13:59:31.236874] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:38.319 [2024-12-11 13:59:31.236902] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:18:38.319 [2024-12-11 13:59:31.236912] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:38.319 1 00:18:38.319 [2024-12-11 13:59:31.237005] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:18:38.319 13:59:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.319 13:59:31 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76858 00:18:38.319 [2024-12-11 13:59:31.244852] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:18:38.319 [2024-12-11 13:59:31.251321] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:18:38.319 [2024-12-11 13:59:31.259038] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:18:38.319 [2024-12-11 
13:59:31.259068] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:19:34.616 00:19:34.616 fio_test: (groupid=0, jobs=1): err= 0: pid=76861: Wed Dec 11 14:00:20 2024 00:19:34.616 read: IOPS=21.3k, BW=83.2MiB/s (87.2MB/s)(4989MiB/60003msec) 00:19:34.616 slat (usec): min=2, max=1025, avg= 7.63, stdev= 3.15 00:19:34.616 clat (usec): min=1060, max=6469.6k, avg=2935.20, stdev=44315.81 00:19:34.616 lat (usec): min=1067, max=6469.6k, avg=2942.83, stdev=44315.81 00:19:34.616 clat percentiles (usec): 00:19:34.616 | 1.00th=[ 1991], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2278], 00:19:34.616 | 30.00th=[ 2311], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2409], 00:19:34.616 | 70.00th=[ 2638], 80.00th=[ 3064], 90.00th=[ 3195], 95.00th=[ 3752], 00:19:34.616 | 99.00th=[ 5080], 99.50th=[ 5538], 99.90th=[ 7046], 99.95th=[ 7635], 00:19:34.616 | 99.99th=[13173] 00:19:34.616 bw ( KiB/s): min=26568, max=104976, per=100.00%, avg=94828.69, stdev=13141.35, samples=107 00:19:34.617 iops : min= 6642, max=26244, avg=23707.17, stdev=3285.33, samples=107 00:19:34.617 write: IOPS=21.3k, BW=83.1MiB/s (87.1MB/s)(4985MiB/60003msec); 0 zone resets 00:19:34.617 slat (usec): min=2, max=1070, avg= 7.65, stdev= 2.91 00:19:34.617 clat (usec): min=1107, max=6470.3k, avg=3063.68, stdev=47198.88 00:19:34.617 lat (usec): min=1115, max=6470.3k, avg=3071.33, stdev=47198.87 00:19:34.617 clat percentiles (usec): 00:19:34.617 | 1.00th=[ 1991], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2376], 00:19:34.617 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2540], 00:19:34.617 | 70.00th=[ 2671], 80.00th=[ 3163], 90.00th=[ 3326], 95.00th=[ 3752], 00:19:34.617 | 99.00th=[ 5080], 99.50th=[ 5604], 99.90th=[ 7177], 99.95th=[ 7898], 00:19:34.617 | 99.99th=[13435] 00:19:34.617 bw ( KiB/s): min=27480, max=104128, per=100.00%, avg=94762.25, stdev=13033.10, samples=107 00:19:34.617 iops : min= 6870, max=26032, avg=23690.55, stdev=3258.27, samples=107 00:19:34.617 lat (msec) : 2=1.03%, 4=95.09%, 10=3.85%, 20=0.01%, >=2000=0.01% 00:19:34.617 cpu : usr=12.04%, sys=32.26%, ctx=111990, majf=0, minf=13 00:19:34.617 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:34.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.617 issued rwts: total=1277276,1276166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.617 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.617 00:19:34.617 Run status group 0 (all jobs): 00:19:34.617 READ: bw=83.2MiB/s (87.2MB/s), 83.2MiB/s-83.2MiB/s (87.2MB/s-87.2MB/s), io=4989MiB (5232MB), run=60003-60003msec 00:19:34.617 WRITE: bw=83.1MiB/s (87.1MB/s), 83.1MiB/s-83.1MiB/s (87.1MB/s-87.1MB/s), io=4985MiB (5227MB), run=60003-60003msec 00:19:34.617 00:19:34.617 Disk stats (read/write): 00:19:34.617 ublkb1: ios=1274956/1273952, merge=0/0, ticks=3637966/3662893, in_queue=7300859, util=99.93% 00:19:34.617 14:00:20 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:19:34.617 14:00:20 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.617 14:00:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.617 [2024-12-11 14:00:20.067065] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:34.617 [2024-12-11 14:00:20.108024] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:34.617 [2024-12-11 14:00:20.108362] 
ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:34.617 [2024-12-11 14:00:20.116930] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:34.617 [2024-12-11 14:00:20.117195] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:34.617 [2024-12-11 14:00:20.120843] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:34.617 14:00:20 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.617 14:00:20 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:19:34.617 14:00:20 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.617 14:00:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.617 [2024-12-11 14:00:20.130994] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:34.617 [2024-12-11 14:00:20.139146] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:34.617 [2024-12-11 14:00:20.139192] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:34.617 14:00:20 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.617 14:00:20 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:19:34.617 14:00:20 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:19:34.617 14:00:20 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76963 00:19:34.617 14:00:20 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76963 ']' 00:19:34.617 14:00:20 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76963 00:19:34.617 14:00:20 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:19:34.617 14:00:20 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.617 14:00:20 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76963 00:19:34.617 killing process with pid 76963 00:19:34.617 14:00:20 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.617 14:00:20 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.617 14:00:20 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76963' 00:19:34.617 14:00:20 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76963 00:19:34.617 14:00:20 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76963 00:19:34.617 [2024-12-11 14:00:21.785871] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:34.617 [2024-12-11 14:00:21.786185] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:34.617 00:19:34.617 real 1m6.125s 00:19:34.617 user 1m50.950s 00:19:34.617 sys 0m36.926s 00:19:34.617 14:00:23 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.617 ************************************ 00:19:34.617 END TEST ublk_recovery 00:19:34.617 ************************************ 00:19:34.617 14:00:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.617 14:00:23 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:19:34.617 14:00:23 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:34.617 14:00:23 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:34.617 14:00:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:34.617 14:00:23 -- common/autotest_common.sh@10 -- # set +x 00:19:34.617 14:00:23 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:34.617 14:00:23 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:34.617 14:00:23 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:34.617 14:00:23 -- spdk/autotest.sh@311 
-- # '[' 0 -eq 1 ']' 00:19:34.617 14:00:23 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:34.617 14:00:23 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:34.617 14:00:23 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:34.617 14:00:23 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:34.617 14:00:23 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:34.617 14:00:23 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:19:34.617 14:00:23 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:34.617 14:00:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:34.617 14:00:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.617 14:00:23 -- common/autotest_common.sh@10 -- # set +x 00:19:34.617 ************************************ 00:19:34.617 START TEST ftl 00:19:34.617 ************************************ 00:19:34.617 14:00:23 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:34.617 * Looking for test storage... 00:19:34.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:34.617 14:00:23 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:34.617 14:00:23 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:19:34.617 14:00:23 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:34.617 14:00:23 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:34.617 14:00:23 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.617 14:00:23 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.617 14:00:23 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.617 14:00:23 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.617 14:00:23 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.617 14:00:23 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.617 14:00:23 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.617 14:00:23 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.617 14:00:23 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.617 14:00:23 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.617 14:00:23 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.617 14:00:23 ftl -- scripts/common.sh@344 -- # case "$op" in 00:19:34.617 14:00:23 ftl -- scripts/common.sh@345 -- # : 1 00:19:34.617 14:00:23 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.617 14:00:23 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:34.617 14:00:23 ftl -- scripts/common.sh@365 -- # decimal 1 00:19:34.617 14:00:23 ftl -- scripts/common.sh@353 -- # local d=1 00:19:34.617 14:00:23 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.617 14:00:23 ftl -- scripts/common.sh@355 -- # echo 1 00:19:34.617 14:00:23 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.617 14:00:23 ftl -- scripts/common.sh@366 -- # decimal 2 00:19:34.617 14:00:23 ftl -- scripts/common.sh@353 -- # local d=2 00:19:34.617 14:00:23 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.617 14:00:23 ftl -- scripts/common.sh@355 -- # echo 2 00:19:34.617 14:00:23 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.617 14:00:23 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.617 14:00:23 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.617 14:00:23 ftl -- scripts/common.sh@368 -- # return 0 00:19:34.617 14:00:23 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.617 14:00:23 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:34.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.617 --rc genhtml_branch_coverage=1 00:19:34.617 --rc genhtml_function_coverage=1 00:19:34.617 --rc genhtml_legend=1 00:19:34.617 --rc geninfo_all_blocks=1 00:19:34.617 --rc geninfo_unexecuted_blocks=1 00:19:34.617 00:19:34.617 ' 00:19:34.617 14:00:23 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:34.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.617 --rc genhtml_branch_coverage=1 00:19:34.617 --rc genhtml_function_coverage=1 00:19:34.617 --rc genhtml_legend=1 00:19:34.617 --rc geninfo_all_blocks=1 00:19:34.617 --rc geninfo_unexecuted_blocks=1 00:19:34.617 00:19:34.617 ' 00:19:34.617 14:00:23 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:34.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.617 --rc genhtml_branch_coverage=1 00:19:34.617 --rc genhtml_function_coverage=1 00:19:34.617 --rc genhtml_legend=1 00:19:34.617 --rc geninfo_all_blocks=1 00:19:34.617 --rc geninfo_unexecuted_blocks=1 00:19:34.617 00:19:34.617 ' 00:19:34.617 14:00:23 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:34.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.617 --rc genhtml_branch_coverage=1 00:19:34.617 --rc genhtml_function_coverage=1 00:19:34.617 --rc genhtml_legend=1 00:19:34.617 --rc geninfo_all_blocks=1 00:19:34.617 --rc geninfo_unexecuted_blocks=1 00:19:34.617 00:19:34.617 ' 00:19:34.617 14:00:23 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:34.617 14:00:23 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:34.617 14:00:23 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:34.617 14:00:23 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:34.618 14:00:23 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:34.618 14:00:23 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:34.618 14:00:23 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:34.618 14:00:23 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:34.618 14:00:23 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:34.618 14:00:23 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:34.618 14:00:23 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:34.618 14:00:23 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:34.618 14:00:23 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:34.618 14:00:23 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:34.618 14:00:23 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:34.618 14:00:23 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:34.618 14:00:23 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:34.618 14:00:23 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:34.618 14:00:23 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:34.618 14:00:23 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:34.618 14:00:23 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:34.618 14:00:23 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:34.618 14:00:23 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:34.618 14:00:23 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:34.618 14:00:23 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:34.618 14:00:23 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:34.618 14:00:23 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:34.618 14:00:23 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:34.618 14:00:23 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:34.618 14:00:23 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:34.618 14:00:23 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:34.618 14:00:23 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:19:34.618 14:00:23 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:34.618 14:00:23 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:34.618 14:00:23 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:34.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:34.618 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:34.618 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:34.618 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:34.618 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:34.618 14:00:24 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77774 00:19:34.618 14:00:24 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:34.618 14:00:24 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77774 00:19:34.618 14:00:24 ftl -- common/autotest_common.sh@835 -- # '[' -z 77774 ']' 00:19:34.618 14:00:24 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.618 14:00:24 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.618 14:00:24 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.618 14:00:24 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.618 14:00:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:34.618 [2024-12-11 14:00:24.584461] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:19:34.618 [2024-12-11 14:00:24.584598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77774 ] 00:19:34.618 [2024-12-11 14:00:24.767174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.618 [2024-12-11 14:00:24.878355] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.618 14:00:25 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.618 14:00:25 ftl -- common/autotest_common.sh@868 -- # return 0 00:19:34.618 14:00:25 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:34.618 14:00:25 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:34.618 14:00:26 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:34.618 14:00:26 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:34.618 14:00:27 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:34.618 14:00:27 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:34.618 14:00:27 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:34.618 14:00:27 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:34.618 14:00:27 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:34.618 14:00:27 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:34.618 14:00:27 ftl -- ftl/ftl.sh@50 -- # break 00:19:34.618 14:00:27 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:34.618 14:00:27 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:19:34.618 14:00:27 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:34.618 14:00:27 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:34.618 14:00:27 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:34.618 14:00:27 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:34.618 14:00:27 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:34.618 14:00:27 ftl -- ftl/ftl.sh@63 -- # break 00:19:34.618 14:00:27 ftl -- ftl/ftl.sh@66 -- # killprocess 77774 00:19:34.618 14:00:27 ftl -- common/autotest_common.sh@954 -- # '[' -z 77774 ']' 00:19:34.618 14:00:27 ftl -- common/autotest_common.sh@958 -- # kill -0 77774 00:19:34.618 14:00:27 ftl -- common/autotest_common.sh@959 -- # uname 00:19:34.618 14:00:27 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.618 14:00:27 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77774 00:19:34.618 14:00:27 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.618 14:00:27 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.618 killing process with pid 77774 00:19:34.618 14:00:27 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77774' 00:19:34.618 14:00:27 ftl -- common/autotest_common.sh@973 -- # kill 77774 00:19:34.618 14:00:27 ftl -- common/autotest_common.sh@978 -- # wait 77774 00:19:37.152 14:00:29 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:37.152 14:00:29 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:37.152 14:00:29 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:37.152 14:00:29 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:37.152 14:00:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:37.152 ************************************ 00:19:37.152 START TEST ftl_fio_basic 00:19:37.152 ************************************ 00:19:37.152 14:00:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:37.152 * Looking for test storage... 00:19:37.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:37.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.152 --rc genhtml_branch_coverage=1 00:19:37.152 --rc genhtml_function_coverage=1 00:19:37.152 --rc genhtml_legend=1 00:19:37.152 --rc geninfo_all_blocks=1 00:19:37.152 --rc geninfo_unexecuted_blocks=1 00:19:37.152 00:19:37.152 ' 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:37.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.152 --rc genhtml_branch_coverage=1 00:19:37.152 --rc genhtml_function_coverage=1 00:19:37.152 --rc genhtml_legend=1 00:19:37.152 --rc geninfo_all_blocks=1 00:19:37.152 --rc geninfo_unexecuted_blocks=1 00:19:37.152 00:19:37.152 ' 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:37.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.152 --rc genhtml_branch_coverage=1 00:19:37.152 --rc genhtml_function_coverage=1 00:19:37.152 --rc genhtml_legend=1 00:19:37.152 --rc geninfo_all_blocks=1 00:19:37.152 --rc geninfo_unexecuted_blocks=1 00:19:37.152 00:19:37.152 ' 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:37.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.152 --rc genhtml_branch_coverage=1 00:19:37.152 --rc genhtml_function_coverage=1 00:19:37.152 --rc genhtml_legend=1 00:19:37.152 --rc geninfo_all_blocks=1 00:19:37.152 --rc geninfo_unexecuted_blocks=1 00:19:37.152 00:19:37.152 ' 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:37.152 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:37.152 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77923 00:19:37.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77923 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77923 ']' 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.411 14:00:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:37.411 [2024-12-11 14:00:30.326640] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
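A condensed, hedged sketch of the launch-and-wait step traced above: -m 7 is the reactor CPU mask (binary 111), which is why three reactors come up on cores 0-2 just below. The poll loop is a simplified stand-in for the harness's waitforlisten helper, not its actual implementation:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &
svcpid=$!
# poll the RPC socket until the target answers (rpc.py defaults to /var/tmp/spdk.sock)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &>/dev/null; do
    kill -0 "$svcpid" 2>/dev/null || exit 1   # target died before listening
    sleep 0.1
done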
00:19:37.411 [2024-12-11 14:00:30.326779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77923 ] 00:19:37.670 [2024-12-11 14:00:30.510612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:37.670 [2024-12-11 14:00:30.619722] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.670 [2024-12-11 14:00:30.619822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.670 [2024-12-11 14:00:30.619870] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.607 14:00:31 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:38.607 14:00:31 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:19:38.607 14:00:31 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:38.607 14:00:31 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:38.607 14:00:31 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:38.607 14:00:31 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:38.607 14:00:31 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:38.607 14:00:31 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:38.865 14:00:31 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:38.865 14:00:31 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:38.865 14:00:31 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:38.865 14:00:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:38.865 14:00:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:38.865 14:00:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:38.865 14:00:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:38.865 14:00:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:39.124 14:00:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:39.124 { 00:19:39.124 "name": "nvme0n1", 00:19:39.124 "aliases": [ 00:19:39.124 "ac4935ca-48c1-41d2-87f8-6c76c9f62a72" 00:19:39.124 ], 00:19:39.124 "product_name": "NVMe disk", 00:19:39.124 "block_size": 4096, 00:19:39.124 "num_blocks": 1310720, 00:19:39.124 "uuid": "ac4935ca-48c1-41d2-87f8-6c76c9f62a72", 00:19:39.124 "numa_id": -1, 00:19:39.124 "assigned_rate_limits": { 00:19:39.124 "rw_ios_per_sec": 0, 00:19:39.124 "rw_mbytes_per_sec": 0, 00:19:39.124 "r_mbytes_per_sec": 0, 00:19:39.124 "w_mbytes_per_sec": 0 00:19:39.124 }, 00:19:39.124 "claimed": false, 00:19:39.124 "zoned": false, 00:19:39.124 "supported_io_types": { 00:19:39.124 "read": true, 00:19:39.124 "write": true, 00:19:39.124 "unmap": true, 00:19:39.124 "flush": true, 00:19:39.124 "reset": true, 00:19:39.124 "nvme_admin": true, 00:19:39.124 "nvme_io": true, 00:19:39.124 "nvme_io_md": false, 00:19:39.124 "write_zeroes": true, 00:19:39.124 "zcopy": false, 00:19:39.124 "get_zone_info": false, 00:19:39.124 "zone_management": false, 00:19:39.124 "zone_append": false, 00:19:39.124 "compare": true, 00:19:39.124 "compare_and_write": false, 00:19:39.124 "abort": true, 00:19:39.124 
"seek_hole": false, 00:19:39.124 "seek_data": false, 00:19:39.124 "copy": true, 00:19:39.124 "nvme_iov_md": false 00:19:39.124 }, 00:19:39.124 "driver_specific": { 00:19:39.124 "nvme": [ 00:19:39.124 { 00:19:39.124 "pci_address": "0000:00:11.0", 00:19:39.124 "trid": { 00:19:39.124 "trtype": "PCIe", 00:19:39.124 "traddr": "0000:00:11.0" 00:19:39.124 }, 00:19:39.124 "ctrlr_data": { 00:19:39.124 "cntlid": 0, 00:19:39.124 "vendor_id": "0x1b36", 00:19:39.124 "model_number": "QEMU NVMe Ctrl", 00:19:39.124 "serial_number": "12341", 00:19:39.124 "firmware_revision": "8.0.0", 00:19:39.124 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:39.124 "oacs": { 00:19:39.124 "security": 0, 00:19:39.124 "format": 1, 00:19:39.124 "firmware": 0, 00:19:39.124 "ns_manage": 1 00:19:39.124 }, 00:19:39.124 "multi_ctrlr": false, 00:19:39.124 "ana_reporting": false 00:19:39.124 }, 00:19:39.124 "vs": { 00:19:39.124 "nvme_version": "1.4" 00:19:39.124 }, 00:19:39.124 "ns_data": { 00:19:39.124 "id": 1, 00:19:39.124 "can_share": false 00:19:39.124 } 00:19:39.124 } 00:19:39.124 ], 00:19:39.124 "mp_policy": "active_passive" 00:19:39.124 } 00:19:39.124 } 00:19:39.124 ]' 00:19:39.124 14:00:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:39.124 14:00:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:39.124 14:00:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:39.124 14:00:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:39.124 14:00:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:39.124 14:00:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:19:39.124 14:00:32 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:39.124 14:00:32 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:39.124 14:00:32 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:39.124 14:00:32 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:39.124 14:00:32 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:39.381 14:00:32 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:39.381 14:00:32 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:39.639 14:00:32 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=e236fe0f-2fb9-4519-bd16-95e17a042f46 00:19:39.639 14:00:32 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e236fe0f-2fb9-4519-bd16-95e17a042f46 00:19:39.898 14:00:32 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=93451809-4c6d-4a43-b309-f4c207d89a08 00:19:39.898 14:00:32 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 93451809-4c6d-4a43-b309-f4c207d89a08 00:19:39.898 14:00:32 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:19:39.898 14:00:32 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:39.898 14:00:32 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=93451809-4c6d-4a43-b309-f4c207d89a08 00:19:39.898 14:00:32 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:19:39.898 14:00:32 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 93451809-4c6d-4a43-b309-f4c207d89a08 00:19:39.898 14:00:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=93451809-4c6d-4a43-b309-f4c207d89a08 
00:19:39.898 14:00:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:39.898 14:00:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:39.898 14:00:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:39.898 14:00:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93451809-4c6d-4a43-b309-f4c207d89a08 00:19:40.156 14:00:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:40.156 { 00:19:40.156 "name": "93451809-4c6d-4a43-b309-f4c207d89a08", 00:19:40.156 "aliases": [ 00:19:40.157 "lvs/nvme0n1p0" 00:19:40.157 ], 00:19:40.157 "product_name": "Logical Volume", 00:19:40.157 "block_size": 4096, 00:19:40.157 "num_blocks": 26476544, 00:19:40.157 "uuid": "93451809-4c6d-4a43-b309-f4c207d89a08", 00:19:40.157 "assigned_rate_limits": { 00:19:40.157 "rw_ios_per_sec": 0, 00:19:40.157 "rw_mbytes_per_sec": 0, 00:19:40.157 "r_mbytes_per_sec": 0, 00:19:40.157 "w_mbytes_per_sec": 0 00:19:40.157 }, 00:19:40.157 "claimed": false, 00:19:40.157 "zoned": false, 00:19:40.157 "supported_io_types": { 00:19:40.157 "read": true, 00:19:40.157 "write": true, 00:19:40.157 "unmap": true, 00:19:40.157 "flush": false, 00:19:40.157 "reset": true, 00:19:40.157 "nvme_admin": false, 00:19:40.157 "nvme_io": false, 00:19:40.157 "nvme_io_md": false, 00:19:40.157 "write_zeroes": true, 00:19:40.157 "zcopy": false, 00:19:40.157 "get_zone_info": false, 00:19:40.157 "zone_management": false, 00:19:40.157 "zone_append": false, 00:19:40.157 "compare": false, 00:19:40.157 "compare_and_write": false, 00:19:40.157 "abort": false, 00:19:40.157 "seek_hole": true, 00:19:40.157 "seek_data": true, 00:19:40.157 "copy": false, 00:19:40.157 "nvme_iov_md": false 00:19:40.157 }, 00:19:40.157 "driver_specific": { 00:19:40.157 "lvol": { 00:19:40.157 "lvol_store_uuid": "e236fe0f-2fb9-4519-bd16-95e17a042f46", 00:19:40.157 "base_bdev": "nvme0n1", 00:19:40.157 "thin_provision": true, 00:19:40.157 "num_allocated_clusters": 0, 00:19:40.157 "snapshot": false, 00:19:40.157 "clone": false, 00:19:40.157 "esnap_clone": false 00:19:40.157 } 00:19:40.157 } 00:19:40.157 } 00:19:40.157 ]' 00:19:40.157 14:00:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:40.157 14:00:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:40.157 14:00:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:40.157 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:40.157 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:40.157 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:40.157 14:00:33 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:19:40.157 14:00:33 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:19:40.157 14:00:33 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:40.415 14:00:33 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:40.415 14:00:33 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:40.415 14:00:33 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 93451809-4c6d-4a43-b309-f4c207d89a08 00:19:40.415 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=93451809-4c6d-4a43-b309-f4c207d89a08 00:19:40.415 14:00:33 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:40.415 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:40.415 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:40.415 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93451809-4c6d-4a43-b309-f4c207d89a08 00:19:40.674 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:40.674 { 00:19:40.674 "name": "93451809-4c6d-4a43-b309-f4c207d89a08", 00:19:40.674 "aliases": [ 00:19:40.674 "lvs/nvme0n1p0" 00:19:40.674 ], 00:19:40.674 "product_name": "Logical Volume", 00:19:40.674 "block_size": 4096, 00:19:40.674 "num_blocks": 26476544, 00:19:40.674 "uuid": "93451809-4c6d-4a43-b309-f4c207d89a08", 00:19:40.674 "assigned_rate_limits": { 00:19:40.674 "rw_ios_per_sec": 0, 00:19:40.674 "rw_mbytes_per_sec": 0, 00:19:40.674 "r_mbytes_per_sec": 0, 00:19:40.674 "w_mbytes_per_sec": 0 00:19:40.674 }, 00:19:40.674 "claimed": false, 00:19:40.674 "zoned": false, 00:19:40.674 "supported_io_types": { 00:19:40.674 "read": true, 00:19:40.674 "write": true, 00:19:40.674 "unmap": true, 00:19:40.674 "flush": false, 00:19:40.674 "reset": true, 00:19:40.674 "nvme_admin": false, 00:19:40.674 "nvme_io": false, 00:19:40.674 "nvme_io_md": false, 00:19:40.674 "write_zeroes": true, 00:19:40.674 "zcopy": false, 00:19:40.674 "get_zone_info": false, 00:19:40.674 "zone_management": false, 00:19:40.674 "zone_append": false, 00:19:40.674 "compare": false, 00:19:40.674 "compare_and_write": false, 00:19:40.674 "abort": false, 00:19:40.674 "seek_hole": true, 00:19:40.674 "seek_data": true, 00:19:40.674 "copy": false, 00:19:40.674 "nvme_iov_md": false 00:19:40.674 }, 00:19:40.674 "driver_specific": { 00:19:40.674 "lvol": { 00:19:40.674 "lvol_store_uuid": "e236fe0f-2fb9-4519-bd16-95e17a042f46", 00:19:40.674 "base_bdev": "nvme0n1", 00:19:40.674 "thin_provision": true, 00:19:40.674 "num_allocated_clusters": 0, 00:19:40.674 "snapshot": false, 00:19:40.674 "clone": false, 00:19:40.674 "esnap_clone": false 00:19:40.674 } 00:19:40.674 } 00:19:40.674 } 00:19:40.674 ]' 00:19:40.674 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:40.674 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:40.674 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:40.674 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:40.674 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:40.674 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:40.674 14:00:33 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:19:40.674 14:00:33 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:40.932 14:00:33 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:19:40.933 14:00:33 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:19:40.933 14:00:33 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:19:40.933 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:19:40.933 14:00:33 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 93451809-4c6d-4a43-b309-f4c207d89a08 00:19:40.933 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=93451809-4c6d-4a43-b309-f4c207d89a08 00:19:40.933 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:40.933 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:40.933 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:40.933 14:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93451809-4c6d-4a43-b309-f4c207d89a08 00:19:41.192 14:00:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:41.192 { 00:19:41.192 "name": "93451809-4c6d-4a43-b309-f4c207d89a08", 00:19:41.192 "aliases": [ 00:19:41.192 "lvs/nvme0n1p0" 00:19:41.192 ], 00:19:41.192 "product_name": "Logical Volume", 00:19:41.192 "block_size": 4096, 00:19:41.192 "num_blocks": 26476544, 00:19:41.192 "uuid": "93451809-4c6d-4a43-b309-f4c207d89a08", 00:19:41.192 "assigned_rate_limits": { 00:19:41.192 "rw_ios_per_sec": 0, 00:19:41.192 "rw_mbytes_per_sec": 0, 00:19:41.192 "r_mbytes_per_sec": 0, 00:19:41.192 "w_mbytes_per_sec": 0 00:19:41.192 }, 00:19:41.192 "claimed": false, 00:19:41.192 "zoned": false, 00:19:41.192 "supported_io_types": { 00:19:41.192 "read": true, 00:19:41.192 "write": true, 00:19:41.192 "unmap": true, 00:19:41.192 "flush": false, 00:19:41.192 "reset": true, 00:19:41.192 "nvme_admin": false, 00:19:41.192 "nvme_io": false, 00:19:41.192 "nvme_io_md": false, 00:19:41.192 "write_zeroes": true, 00:19:41.192 "zcopy": false, 00:19:41.192 "get_zone_info": false, 00:19:41.192 "zone_management": false, 00:19:41.192 "zone_append": false, 00:19:41.192 "compare": false, 00:19:41.192 "compare_and_write": false, 00:19:41.192 "abort": false, 00:19:41.192 "seek_hole": true, 00:19:41.192 "seek_data": true, 00:19:41.192 "copy": false, 00:19:41.192 "nvme_iov_md": false 00:19:41.192 }, 00:19:41.192 "driver_specific": { 00:19:41.192 "lvol": { 00:19:41.192 "lvol_store_uuid": "e236fe0f-2fb9-4519-bd16-95e17a042f46", 00:19:41.192 "base_bdev": "nvme0n1", 00:19:41.192 "thin_provision": true, 00:19:41.192 "num_allocated_clusters": 0, 00:19:41.192 "snapshot": false, 00:19:41.192 "clone": false, 00:19:41.192 "esnap_clone": false 00:19:41.192 } 00:19:41.192 } 00:19:41.192 } 00:19:41.192 ]' 00:19:41.192 14:00:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:41.192 14:00:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:41.192 14:00:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:41.192 14:00:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:41.192 14:00:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:41.192 14:00:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:41.192 14:00:34 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:19:41.192 14:00:34 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:19:41.192 14:00:34 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 93451809-4c6d-4a43-b309-f4c207d89a08 -c nvc0n1p0 --l2p_dram_limit 60 00:19:41.452 [2024-12-11 14:00:34.327247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.452 [2024-12-11 14:00:34.327304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:41.452 [2024-12-11 14:00:34.327339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:41.452 
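Two things are worth unpacking from the trace above. First, the '[: -eq: unary operator expected' message from fio.sh line 52 is the classic symptom of an unset variable expanding to nothing inside '[ ... -eq 1 ]'; a guarded form such as [[ ${var:-0} -eq 1 ]] (variable name hypothetical here) would sidestep it, and the run continues regardless. Second, the bdev_ftl_create call that kicks off the FTL startup sequence, with arguments copied verbatim from this run; -t 240 raises the RPC client timeout because startup scrubs the NV cache and can take minutes:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -t 240 bdev_ftl_create \
    -b ftl0 \
    -d 93451809-4c6d-4a43-b309-f4c207d89a08 \
    -c nvc0n1p0 \
    --l2p_dram_limit 60
# -b: name of the FTL bdev to create; -d: base (data) bdev, the thin lvol;
# -c: NV cache bdev, the 5171 MiB split of nvc0n1; --l2p_dram_limit: MiB of
# DRAM the L2P table may keep resident (the log later reports 59 of 60 MiB).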
[2024-12-11 14:00:34.327350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.452 [2024-12-11 14:00:34.327455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.452 [2024-12-11 14:00:34.327473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:41.452 [2024-12-11 14:00:34.327490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:41.452 [2024-12-11 14:00:34.327501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.452 [2024-12-11 14:00:34.327568] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:41.452 [2024-12-11 14:00:34.328628] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:41.452 [2024-12-11 14:00:34.328669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.452 [2024-12-11 14:00:34.328681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:41.452 [2024-12-11 14:00:34.328696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.120 ms 00:19:41.452 [2024-12-11 14:00:34.328707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.452 [2024-12-11 14:00:34.328813] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c7227735-778b-47c5-8b5e-88e4a2abe14b 00:19:41.452 [2024-12-11 14:00:34.330326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.452 [2024-12-11 14:00:34.330372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:41.452 [2024-12-11 14:00:34.330384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:41.452 [2024-12-11 14:00:34.330397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.452 [2024-12-11 14:00:34.338077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.452 [2024-12-11 14:00:34.338111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:41.452 [2024-12-11 14:00:34.338124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.577 ms 00:19:41.452 [2024-12-11 14:00:34.338138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.452 [2024-12-11 14:00:34.338286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.452 [2024-12-11 14:00:34.338304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:41.452 [2024-12-11 14:00:34.338316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:19:41.452 [2024-12-11 14:00:34.338333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.452 [2024-12-11 14:00:34.338432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.452 [2024-12-11 14:00:34.338448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:41.452 [2024-12-11 14:00:34.338459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:41.452 [2024-12-11 14:00:34.338471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.452 [2024-12-11 14:00:34.338533] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:41.452 [2024-12-11 14:00:34.343704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.452 [2024-12-11 
14:00:34.343739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:41.452 [2024-12-11 14:00:34.343755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.187 ms 00:19:41.452 [2024-12-11 14:00:34.343769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.452 [2024-12-11 14:00:34.343850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.452 [2024-12-11 14:00:34.343863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:41.452 [2024-12-11 14:00:34.343878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:41.452 [2024-12-11 14:00:34.343888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.452 [2024-12-11 14:00:34.343954] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:41.452 [2024-12-11 14:00:34.344107] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:41.452 [2024-12-11 14:00:34.344130] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:41.452 [2024-12-11 14:00:34.344144] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:41.452 [2024-12-11 14:00:34.344162] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:41.452 [2024-12-11 14:00:34.344174] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:41.452 [2024-12-11 14:00:34.344188] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:41.452 [2024-12-11 14:00:34.344198] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:41.452 [2024-12-11 14:00:34.344211] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:41.452 [2024-12-11 14:00:34.344221] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:41.452 [2024-12-11 14:00:34.344234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.452 [2024-12-11 14:00:34.344247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:41.452 [2024-12-11 14:00:34.344261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:19:41.452 [2024-12-11 14:00:34.344271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.452 [2024-12-11 14:00:34.344372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.452 [2024-12-11 14:00:34.344391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:41.452 [2024-12-11 14:00:34.344404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:41.452 [2024-12-11 14:00:34.344414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.452 [2024-12-11 14:00:34.344552] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:41.452 [2024-12-11 14:00:34.344564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:41.452 [2024-12-11 14:00:34.344581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:41.452 [2024-12-11 14:00:34.344591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.452 [2024-12-11 14:00:34.344605] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:19:41.452 [2024-12-11 14:00:34.344614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:41.452 [2024-12-11 14:00:34.344632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:41.452 [2024-12-11 14:00:34.344641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:41.452 [2024-12-11 14:00:34.344653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:41.452 [2024-12-11 14:00:34.344663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:41.452 [2024-12-11 14:00:34.344675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:41.453 [2024-12-11 14:00:34.344684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:41.453 [2024-12-11 14:00:34.344696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:41.453 [2024-12-11 14:00:34.344706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:41.453 [2024-12-11 14:00:34.344718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:41.453 [2024-12-11 14:00:34.344727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.453 [2024-12-11 14:00:34.344741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:41.453 [2024-12-11 14:00:34.344751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:41.453 [2024-12-11 14:00:34.344763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.453 [2024-12-11 14:00:34.344772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:41.453 [2024-12-11 14:00:34.344785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:41.453 [2024-12-11 14:00:34.344794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.453 [2024-12-11 14:00:34.344806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:41.453 [2024-12-11 14:00:34.344816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:41.453 [2024-12-11 14:00:34.344842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.453 [2024-12-11 14:00:34.344852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:41.453 [2024-12-11 14:00:34.344863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:41.453 [2024-12-11 14:00:34.344873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.453 [2024-12-11 14:00:34.344885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:41.453 [2024-12-11 14:00:34.344894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:41.453 [2024-12-11 14:00:34.344906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.453 [2024-12-11 14:00:34.344915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:41.453 [2024-12-11 14:00:34.344931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:41.453 [2024-12-11 14:00:34.344955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:41.453 [2024-12-11 14:00:34.344968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:41.453 [2024-12-11 14:00:34.344977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:41.453 [2024-12-11 14:00:34.344990] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:41.453 [2024-12-11 14:00:34.344999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:41.453 [2024-12-11 14:00:34.345012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:41.453 [2024-12-11 14:00:34.345022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.453 [2024-12-11 14:00:34.345033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:41.453 [2024-12-11 14:00:34.345043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:41.453 [2024-12-11 14:00:34.345054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.453 [2024-12-11 14:00:34.345063] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:41.453 [2024-12-11 14:00:34.345077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:41.453 [2024-12-11 14:00:34.345088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:41.453 [2024-12-11 14:00:34.345100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.453 [2024-12-11 14:00:34.345111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:41.453 [2024-12-11 14:00:34.345126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:41.453 [2024-12-11 14:00:34.345136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:41.453 [2024-12-11 14:00:34.345148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:41.453 [2024-12-11 14:00:34.345158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:41.453 [2024-12-11 14:00:34.345170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:41.453 [2024-12-11 14:00:34.345182] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:41.453 [2024-12-11 14:00:34.345197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:41.453 [2024-12-11 14:00:34.345209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:41.453 [2024-12-11 14:00:34.345223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:41.453 [2024-12-11 14:00:34.345233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:41.453 [2024-12-11 14:00:34.345248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:41.453 [2024-12-11 14:00:34.345258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:41.453 [2024-12-11 14:00:34.345272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:41.453 [2024-12-11 14:00:34.345282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:41.453 [2024-12-11 14:00:34.345295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:19:41.453 [2024-12-11 14:00:34.345305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:41.453 [2024-12-11 14:00:34.345321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:41.453 [2024-12-11 14:00:34.345331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:41.453 [2024-12-11 14:00:34.345344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:41.453 [2024-12-11 14:00:34.345354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:41.453 [2024-12-11 14:00:34.345367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:41.453 [2024-12-11 14:00:34.345377] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:41.453 [2024-12-11 14:00:34.345392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:41.453 [2024-12-11 14:00:34.345407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:41.453 [2024-12-11 14:00:34.345420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:41.453 [2024-12-11 14:00:34.345431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:41.453 [2024-12-11 14:00:34.345444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:41.453 [2024-12-11 14:00:34.345455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.453 [2024-12-11 14:00:34.345468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:41.453 [2024-12-11 14:00:34.345479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:19:41.453 [2024-12-11 14:00:34.345492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.453 [2024-12-11 14:00:34.345609] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
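The layout dump above is internally consistent, and the arithmetic is quick to check from the values it prints (both computations below use only numbers reported in this run):

# 20971520 L2P entries at 4 bytes per address = the 80.00 MiB l2p region:
echo $(( 20971520 * 4 / 1024 / 1024 ))    # -> 80 (MiB)
# 103424 MiB base bdev at a 4096-byte block size = the reported block count:
echo $(( 103424 * 1048576 / 4096 ))       # -> 26476544 blocks

The scrub pass that follows is the expensive part of first-time startup; with this 5171 MiB cache it accounts for most of the roughly 6.3 s 'FTL startup' total reported below.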
00:19:41.453 [2024-12-11 14:00:34.345636] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:48.021 [2024-12-11 14:00:40.010520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.021 [2024-12-11 14:00:40.010599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:48.021 [2024-12-11 14:00:40.010616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5674.105 ms 00:19:48.021 [2024-12-11 14:00:40.010630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.021 [2024-12-11 14:00:40.049635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.021 [2024-12-11 14:00:40.049704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:48.021 [2024-12-11 14:00:40.049722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.619 ms 00:19:48.021 [2024-12-11 14:00:40.049736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.021 [2024-12-11 14:00:40.049977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.021 [2024-12-11 14:00:40.050001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:48.021 [2024-12-11 14:00:40.050014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:48.021 [2024-12-11 14:00:40.050039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.021 [2024-12-11 14:00:40.108094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.021 [2024-12-11 14:00:40.108149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:48.021 [2024-12-11 14:00:40.108167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.063 ms 00:19:48.021 [2024-12-11 14:00:40.108198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.021 [2024-12-11 14:00:40.108265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.021 [2024-12-11 14:00:40.108280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:48.021 [2024-12-11 14:00:40.108291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:48.021 [2024-12-11 14:00:40.108303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.021 [2024-12-11 14:00:40.108808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.021 [2024-12-11 14:00:40.108846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:48.021 [2024-12-11 14:00:40.108857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:19:48.021 [2024-12-11 14:00:40.108873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.021 [2024-12-11 14:00:40.109022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.021 [2024-12-11 14:00:40.109039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:48.021 [2024-12-11 14:00:40.109050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:19:48.021 [2024-12-11 14:00:40.109066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.021 [2024-12-11 14:00:40.130410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.021 [2024-12-11 14:00:40.130479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:48.021 [2024-12-11 
14:00:40.130494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.332 ms 00:19:48.021 [2024-12-11 14:00:40.130508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.022 [2024-12-11 14:00:40.143266] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:48.022 [2024-12-11 14:00:40.160099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.022 [2024-12-11 14:00:40.160155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:48.022 [2024-12-11 14:00:40.160173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.464 ms 00:19:48.022 [2024-12-11 14:00:40.160203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.022 [2024-12-11 14:00:40.294637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.022 [2024-12-11 14:00:40.294699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:48.022 [2024-12-11 14:00:40.294722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 134.575 ms 00:19:48.022 [2024-12-11 14:00:40.294734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.022 [2024-12-11 14:00:40.294982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.022 [2024-12-11 14:00:40.294996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:48.022 [2024-12-11 14:00:40.295015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:19:48.022 [2024-12-11 14:00:40.295025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.022 [2024-12-11 14:00:40.332488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.022 [2024-12-11 14:00:40.332534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:48.022 [2024-12-11 14:00:40.332551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.427 ms 00:19:48.022 [2024-12-11 14:00:40.332562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.022 [2024-12-11 14:00:40.368751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.022 [2024-12-11 14:00:40.368790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:48.022 [2024-12-11 14:00:40.368808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.156 ms 00:19:48.022 [2024-12-11 14:00:40.368818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.022 [2024-12-11 14:00:40.369635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.022 [2024-12-11 14:00:40.369664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:48.022 [2024-12-11 14:00:40.369678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:19:48.022 [2024-12-11 14:00:40.369688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.022 [2024-12-11 14:00:40.496785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.022 [2024-12-11 14:00:40.496860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:48.022 [2024-12-11 14:00:40.496882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 127.208 ms 00:19:48.022 [2024-12-11 14:00:40.496897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.022 [2024-12-11 
14:00:40.535992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.022 [2024-12-11 14:00:40.536037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:48.022 [2024-12-11 14:00:40.536055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.025 ms 00:19:48.022 [2024-12-11 14:00:40.536066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.022 [2024-12-11 14:00:40.573079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.022 [2024-12-11 14:00:40.573121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:48.022 [2024-12-11 14:00:40.573139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.993 ms 00:19:48.022 [2024-12-11 14:00:40.573149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.022 [2024-12-11 14:00:40.609987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.022 [2024-12-11 14:00:40.610028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:48.022 [2024-12-11 14:00:40.610063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.831 ms 00:19:48.022 [2024-12-11 14:00:40.610074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.022 [2024-12-11 14:00:40.610162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.022 [2024-12-11 14:00:40.610174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:48.022 [2024-12-11 14:00:40.610195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:48.022 [2024-12-11 14:00:40.610206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.022 [2024-12-11 14:00:40.610343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.022 [2024-12-11 14:00:40.610356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:48.022 [2024-12-11 14:00:40.610370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:48.022 [2024-12-11 14:00:40.610381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.022 [2024-12-11 14:00:40.611566] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 6294.099 ms, result 0 00:19:48.022 { 00:19:48.022 "name": "ftl0", 00:19:48.022 "uuid": "c7227735-778b-47c5-8b5e-88e4a2abe14b" 00:19:48.022 } 00:19:48.022 14:00:40 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:19:48.022 14:00:40 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:48.022 14:00:40 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:48.022 14:00:40 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:19:48.022 14:00:40 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:48.022 14:00:40 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:48.022 14:00:40 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:48.022 14:00:40 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:48.022 [ 00:19:48.022 { 00:19:48.022 "name": "ftl0", 00:19:48.022 "aliases": [ 00:19:48.022 "c7227735-778b-47c5-8b5e-88e4a2abe14b" 00:19:48.022 ], 00:19:48.022 "product_name": "FTL 
disk", 00:19:48.022 "block_size": 4096, 00:19:48.022 "num_blocks": 20971520, 00:19:48.022 "uuid": "c7227735-778b-47c5-8b5e-88e4a2abe14b", 00:19:48.022 "assigned_rate_limits": { 00:19:48.022 "rw_ios_per_sec": 0, 00:19:48.022 "rw_mbytes_per_sec": 0, 00:19:48.022 "r_mbytes_per_sec": 0, 00:19:48.022 "w_mbytes_per_sec": 0 00:19:48.022 }, 00:19:48.022 "claimed": false, 00:19:48.022 "zoned": false, 00:19:48.022 "supported_io_types": { 00:19:48.022 "read": true, 00:19:48.022 "write": true, 00:19:48.022 "unmap": true, 00:19:48.022 "flush": true, 00:19:48.022 "reset": false, 00:19:48.022 "nvme_admin": false, 00:19:48.022 "nvme_io": false, 00:19:48.022 "nvme_io_md": false, 00:19:48.022 "write_zeroes": true, 00:19:48.022 "zcopy": false, 00:19:48.022 "get_zone_info": false, 00:19:48.022 "zone_management": false, 00:19:48.022 "zone_append": false, 00:19:48.022 "compare": false, 00:19:48.022 "compare_and_write": false, 00:19:48.022 "abort": false, 00:19:48.022 "seek_hole": false, 00:19:48.022 "seek_data": false, 00:19:48.022 "copy": false, 00:19:48.022 "nvme_iov_md": false 00:19:48.022 }, 00:19:48.022 "driver_specific": { 00:19:48.022 "ftl": { 00:19:48.022 "base_bdev": "93451809-4c6d-4a43-b309-f4c207d89a08", 00:19:48.022 "cache": "nvc0n1p0" 00:19:48.022 } 00:19:48.022 } 00:19:48.022 } 00:19:48.022 ] 00:19:48.022 14:00:41 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:19:48.022 14:00:41 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:19:48.022 14:00:41 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:48.280 14:00:41 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:19:48.280 14:00:41 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:48.540 [2024-12-11 14:00:41.444855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.540 [2024-12-11 14:00:41.444916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:48.540 [2024-12-11 14:00:41.444932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:48.540 [2024-12-11 14:00:41.444961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.540 [2024-12-11 14:00:41.445022] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:48.540 [2024-12-11 14:00:41.449467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.540 [2024-12-11 14:00:41.449505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:48.540 [2024-12-11 14:00:41.449521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.427 ms 00:19:48.540 [2024-12-11 14:00:41.449539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.540 [2024-12-11 14:00:41.450391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.540 [2024-12-11 14:00:41.450421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:48.540 [2024-12-11 14:00:41.450436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:19:48.540 [2024-12-11 14:00:41.450446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.540 [2024-12-11 14:00:41.452979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.540 [2024-12-11 14:00:41.453006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:48.540 
[2024-12-11 14:00:41.453022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.493 ms 00:19:48.540 [2024-12-11 14:00:41.453032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.540 [2024-12-11 14:00:41.458203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.540 [2024-12-11 14:00:41.458241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:48.540 [2024-12-11 14:00:41.458256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.127 ms 00:19:48.540 [2024-12-11 14:00:41.458267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.540 [2024-12-11 14:00:41.495406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.540 [2024-12-11 14:00:41.495448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:48.540 [2024-12-11 14:00:41.495498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.074 ms 00:19:48.540 [2024-12-11 14:00:41.495509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.540 [2024-12-11 14:00:41.519234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.540 [2024-12-11 14:00:41.519277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:48.540 [2024-12-11 14:00:41.519299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.687 ms 00:19:48.540 [2024-12-11 14:00:41.519311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.540 [2024-12-11 14:00:41.519609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.540 [2024-12-11 14:00:41.519623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:48.540 [2024-12-11 14:00:41.519637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:19:48.540 [2024-12-11 14:00:41.519648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.540 [2024-12-11 14:00:41.556203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.540 [2024-12-11 14:00:41.556245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:48.540 [2024-12-11 14:00:41.556261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.572 ms 00:19:48.540 [2024-12-11 14:00:41.556271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.800 [2024-12-11 14:00:41.592481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.800 [2024-12-11 14:00:41.592536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:48.800 [2024-12-11 14:00:41.592553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.177 ms 00:19:48.800 [2024-12-11 14:00:41.592563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.800 [2024-12-11 14:00:41.627657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.800 [2024-12-11 14:00:41.627698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:48.800 [2024-12-11 14:00:41.627714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.066 ms 00:19:48.800 [2024-12-11 14:00:41.627724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.800 [2024-12-11 14:00:41.662881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.800 [2024-12-11 14:00:41.662922] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:19:48.801 [2024-12-11 14:00:41.662954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.009 ms
00:19:48.801 [2024-12-11 14:00:41.662964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.801 [2024-12-11 14:00:41.663032] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:48.801 [2024-12-11 14:00:41.663049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free
00:19:48.802 [2024-12-11 14:00:41.664320] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:48.802 [2024-12-11 14:00:41.664332] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c7227735-778b-47c5-8b5e-88e4a2abe14b
00:19:48.802 [2024-12-11 14:00:41.664343] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:48.802 [2024-12-11 14:00:41.664358] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:48.802 [2024-12-11 14:00:41.664368] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:48.802 [2024-12-11 14:00:41.664384] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:48.802 [2024-12-11 14:00:41.664393] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:48.802 [2024-12-11 14:00:41.664406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:48.802 [2024-12-11 14:00:41.664416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:48.802 [2024-12-11 14:00:41.664428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:48.802 [2024-12-11 14:00:41.664438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:48.802 [2024-12-11 14:00:41.664450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:48.802 [2024-12-11 14:00:41.664461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:48.802 [2024-12-11 14:00:41.664475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms
00:19:48.802 [2024-12-11 14:00:41.664485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.802 [2024-12-11 14:00:41.684550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:48.802 [2024-12-11 14:00:41.684592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:48.802 [2024-12-11 14:00:41.684607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.996 ms
00:19:48.802 [2024-12-11 14:00:41.684618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.802 [2024-12-11 14:00:41.685182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:48.802 [2024-12-11 14:00:41.685204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:48.802 [2024-12-11 14:00:41.685218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms
00:19:48.802 [2024-12-11 14:00:41.685228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.802 [2024-12-11 14:00:41.755723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:48.802 [2024-12-11 14:00:41.755767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:48.802 [2024-12-11 14:00:41.755784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:48.802 [2024-12-11 14:00:41.755795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
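Each FTL management step in the shutdown trace above and below is logged as an Action or Rollback record followed by name, duration, and status records. A minimal sketch for condensing those records into a per-step timing table, assuming the console output has been saved to a file (build.log is a placeholder name, not part of the test suite):

# Pair every trace_step 'name:' record with the 'duration:' record
# that follows it and print one "duration<TAB>step name" line per step.
awk '/trace_step/ && /name: / { sub(/.*name: /, ""); step = $0 }
     /trace_step/ && /duration: / { sub(/.*duration: /, ""); print $0 "\t" step }' build.log

Applied to the records here, it would emit lines like "35.009 ms	Set FTL clean state", which makes it easy to see which steps dominate the 544.030 ms shutdown total reported below.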
00:19:48.802 [2024-12-11 14:00:41.755889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:48.802 [2024-12-11 14:00:41.755902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:48.802 [2024-12-11 14:00:41.755916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:48.802 [2024-12-11 14:00:41.755926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.802 [2024-12-11 14:00:41.756049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:48.802 [2024-12-11 14:00:41.756067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:48.802 [2024-12-11 14:00:41.756080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:48.802 [2024-12-11 14:00:41.756090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.802 [2024-12-11 14:00:41.756137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:48.802 [2024-12-11 14:00:41.756148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:48.802 [2024-12-11 14:00:41.756161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:48.802 [2024-12-11 14:00:41.756171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:49.062 [2024-12-11 14:00:41.886749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:49.062 [2024-12-11 14:00:41.886806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:49.062 [2024-12-11 14:00:41.886832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:49.062 [2024-12-11 14:00:41.886844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:49.062 [2024-12-11 14:00:41.986732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:49.062 [2024-12-11 14:00:41.986804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:49.062 [2024-12-11 14:00:41.986856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:49.062 [2024-12-11 14:00:41.986869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:49.062 [2024-12-11 14:00:41.987019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:49.062 [2024-12-11 14:00:41.987032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:49.062 [2024-12-11 14:00:41.987050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:49.062 [2024-12-11 14:00:41.987061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:49.062 [2024-12-11 14:00:41.987189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:49.062 [2024-12-11 14:00:41.987202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:49.062 [2024-12-11 14:00:41.987215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:49.062 [2024-12-11 14:00:41.987226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:49.062 [2024-12-11 14:00:41.987387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:49.062 [2024-12-11 14:00:41.987402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:49.062 [2024-12-11 14:00:41.987416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:49.062 [2024-12-11 14:00:41.987429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:49.062 [2024-12-11 14:00:41.987510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:49.062 [2024-12-11 14:00:41.987522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:19:49.062 [2024-12-11 14:00:41.987536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:49.062 [2024-12-11 14:00:41.987546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:49.062 [2024-12-11 14:00:41.987618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:49.062 [2024-12-11 14:00:41.987629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:49.062 [2024-12-11 14:00:41.987643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:49.062 [2024-12-11 14:00:41.987653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:49.062 [2024-12-11 14:00:41.987741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:49.062 [2024-12-11 14:00:41.987753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:49.062 [2024-12-11 14:00:41.987766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:49.062 [2024-12-11 14:00:41.987776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:49.062 [2024-12-11 14:00:41.988041] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 544.030 ms, result 0
00:19:49.062 true
00:19:49.062 14:00:42 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77923
00:19:49.062 14:00:42 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77923 ']'
00:19:49.062 14:00:42 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77923
00:19:49.062 14:00:42 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname
00:19:49.062 14:00:42 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:49.062 14:00:42 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77923
00:19:49.062 14:00:42 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:49.062 14:00:42 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:49.062 killing process with pid 77923
14:00:42 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77923'
14:00:42 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77923
00:19:49.062 14:00:42 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77923
00:19:55.629 14:00:47 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:19:55.629 14:00:47 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:19:55.629 14:00:47 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:19:55.629 14:00:47 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:55.629 14:00:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:19:55.629 14:00:47 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:19:55.629 14:00:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:19:55.629 14:00:47 ftl.ftl_fio_basic --
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:55.629 14:00:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:55.629 14:00:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:55.629 14:00:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.629 14:00:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:55.629 14:00:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:55.629 14:00:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:55.629 14:00:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.629 14:00:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:55.629 14:00:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:55.629 14:00:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:55.629 14:00:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:55.629 14:00:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:55.629 14:00:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:55.629 14:00:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:55.629 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:19:55.629 fio-3.35 00:19:55.629 Starting 1 thread 00:20:00.902 00:20:00.902 test: (groupid=0, jobs=1): err= 0: pid=78158: Wed Dec 11 14:00:53 2024 00:20:00.903 read: IOPS=903, BW=60.0MiB/s (62.9MB/s)(255MiB/4243msec) 00:20:00.903 slat (nsec): min=4514, max=41483, avg=6523.32, stdev=2923.65 00:20:00.903 clat (usec): min=370, max=722, avg=503.31, stdev=43.83 00:20:00.903 lat (usec): min=386, max=741, avg=509.83, stdev=44.38 00:20:00.903 clat percentiles (usec): 00:20:00.903 | 1.00th=[ 392], 5.00th=[ 433], 10.00th=[ 457], 20.00th=[ 461], 00:20:00.903 | 30.00th=[ 469], 40.00th=[ 494], 50.00th=[ 523], 60.00th=[ 529], 00:20:00.903 | 70.00th=[ 529], 80.00th=[ 529], 90.00th=[ 545], 95.00th=[ 570], 00:20:00.903 | 99.00th=[ 594], 99.50th=[ 611], 99.90th=[ 685], 99.95th=[ 717], 00:20:00.903 | 99.99th=[ 725] 00:20:00.903 write: IOPS=909, BW=60.4MiB/s (63.4MB/s)(256MiB/4238msec); 0 zone resets 00:20:00.903 slat (nsec): min=15671, max=95313, avg=19893.30, stdev=5421.21 00:20:00.903 clat (usec): min=401, max=1117, avg=562.90, stdev=72.28 00:20:00.903 lat (usec): min=421, max=1144, avg=582.79, stdev=73.84 00:20:00.903 clat percentiles (usec): 00:20:00.903 | 1.00th=[ 433], 5.00th=[ 478], 10.00th=[ 482], 20.00th=[ 529], 00:20:00.903 | 30.00th=[ 545], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 553], 00:20:00.903 | 70.00th=[ 578], 80.00th=[ 611], 90.00th=[ 619], 95.00th=[ 635], 00:20:00.903 | 99.00th=[ 963], 99.50th=[ 996], 99.90th=[ 1057], 99.95th=[ 1090], 00:20:00.903 | 99.99th=[ 1123] 00:20:00.903 bw ( KiB/s): min=56440, max=63512, per=99.91%, avg=61812.00, stdev=2214.52, samples=8 00:20:00.903 iops : min= 830, max= 934, avg=909.00, stdev=32.57, samples=8 00:20:00.903 lat (usec) : 500=28.46%, 750=70.67%, 1000=0.64% 00:20:00.903 lat 
(msec) : 2=0.23% 00:20:00.903 cpu : usr=99.25%, sys=0.14%, ctx=9, majf=0, minf=1167 00:20:00.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:00.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.903 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:00.903 00:20:00.903 Run status group 0 (all jobs): 00:20:00.903 READ: bw=60.0MiB/s (62.9MB/s), 60.0MiB/s-60.0MiB/s (62.9MB/s-62.9MB/s), io=255MiB (267MB), run=4243-4243msec 00:20:00.903 WRITE: bw=60.4MiB/s (63.4MB/s), 60.4MiB/s-60.4MiB/s (63.4MB/s-63.4MB/s), io=256MiB (269MB), run=4238-4238msec 00:20:02.806 ----------------------------------------------------- 00:20:02.806 Suppressions used: 00:20:02.806 count bytes template 00:20:02.806 1 5 /usr/src/fio/parse.c 00:20:02.806 1 8 libtcmalloc_minimal.so 00:20:02.806 1 904 libcrypto.so 00:20:02.806 ----------------------------------------------------- 00:20:02.806 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:02.806 14:00:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:02.806 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:02.806 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:02.806 fio-3.35 00:20:02.806 Starting 2 threads 00:20:29.415 00:20:29.415 first_half: (groupid=0, jobs=1): err= 0: pid=78261: Wed Dec 11 14:01:21 2024 00:20:29.415 read: IOPS=2710, BW=10.6MiB/s (11.1MB/s)(255MiB/24097msec) 00:20:29.415 slat (nsec): min=3582, max=66557, avg=6147.01, stdev=2129.82 00:20:29.415 clat (usec): min=1051, max=264167, avg=37355.80, stdev=17385.40 00:20:29.415 lat (usec): min=1057, max=264173, avg=37361.94, stdev=17385.62 00:20:29.415 clat percentiles (msec): 00:20:29.415 | 1.00th=[ 17], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 33], 00:20:29.415 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:20:29.415 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 43], 95.00th=[ 57], 00:20:29.415 | 99.00th=[ 142], 99.50th=[ 159], 99.90th=[ 182], 99.95th=[ 190], 00:20:29.415 | 99.99th=[ 255] 00:20:29.415 write: IOPS=3271, BW=12.8MiB/s (13.4MB/s)(256MiB/20032msec); 0 zone resets 00:20:29.415 slat (usec): min=4, max=831, avg= 7.79, stdev= 9.22 00:20:29.415 clat (usec): min=414, max=90890, avg=9813.72, stdev=16391.91 00:20:29.415 lat (usec): min=426, max=90897, avg=9821.51, stdev=16391.96 00:20:29.415 clat percentiles (usec): 00:20:29.415 | 1.00th=[ 1004], 5.00th=[ 1303], 10.00th=[ 1582], 20.00th=[ 1958], 00:20:29.415 | 30.00th=[ 3064], 40.00th=[ 4621], 50.00th=[ 5538], 60.00th=[ 6456], 00:20:29.415 | 70.00th=[ 7439], 80.00th=[10552], 90.00th=[13829], 95.00th=[36963], 00:20:29.415 | 99.00th=[83362], 99.50th=[85459], 99.90th=[88605], 99.95th=[89654], 00:20:29.415 | 99.99th=[89654] 00:20:29.415 bw ( KiB/s): min= 400, max=43936, per=91.82%, avg=21845.33, stdev=13609.04, samples=24 00:20:29.415 iops : min= 100, max=10984, avg=5461.33, stdev=3402.26, samples=24 00:20:29.415 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.45% 00:20:29.415 lat (msec) : 2=10.09%, 4=7.58%, 10=21.31%, 20=7.41%, 50=47.14% 00:20:29.415 lat (msec) : 100=5.01%, 250=0.95%, 500=0.01% 00:20:29.415 cpu : usr=99.24%, sys=0.18%, ctx=37, majf=0, minf=5595 00:20:29.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:29.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.415 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:29.415 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.415 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:29.415 second_half: (groupid=0, jobs=1): err= 0: pid=78262: Wed Dec 11 14:01:21 2024 00:20:29.415 read: IOPS=2690, BW=10.5MiB/s (11.0MB/s)(255MiB/24275msec) 00:20:29.415 slat (nsec): min=3548, max=34690, avg=6184.63, stdev=2122.67 00:20:29.415 clat (usec): min=1141, max=278442, avg=36684.45, stdev=20682.55 00:20:29.415 lat (usec): min=1147, max=278446, avg=36690.64, stdev=20682.83 00:20:29.415 clat percentiles (msec): 00:20:29.415 | 1.00th=[ 8], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 33], 00:20:29.415 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:20:29.415 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 53], 00:20:29.415 
| 99.00th=[ 155], 99.50th=[ 169], 99.90th=[ 205], 99.95th=[ 222], 00:20:29.415 | 99.99th=[ 271] 00:20:29.415 write: IOPS=2974, BW=11.6MiB/s (12.2MB/s)(256MiB/22036msec); 0 zone resets 00:20:29.415 slat (usec): min=4, max=547, avg= 7.90, stdev= 5.15 00:20:29.416 clat (usec): min=408, max=92481, avg=10831.93, stdev=17861.67 00:20:29.416 lat (usec): min=437, max=92487, avg=10839.83, stdev=17861.84 00:20:29.416 clat percentiles (usec): 00:20:29.416 | 1.00th=[ 988], 5.00th=[ 1270], 10.00th=[ 1467], 20.00th=[ 1729], 00:20:29.416 | 30.00th=[ 2008], 40.00th=[ 3130], 50.00th=[ 4752], 60.00th=[ 6194], 00:20:29.416 | 70.00th=[ 7701], 80.00th=[11863], 90.00th=[32113], 95.00th=[43254], 00:20:29.416 | 99.00th=[84411], 99.50th=[86508], 99.90th=[89654], 99.95th=[89654], 00:20:29.416 | 99.99th=[90702] 00:20:29.416 bw ( KiB/s): min= 328, max=54296, per=88.14%, avg=20971.52, stdev=16121.37, samples=25 00:20:29.416 iops : min= 82, max=13574, avg=5242.88, stdev=4030.34, samples=25 00:20:29.416 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.47% 00:20:29.416 lat (msec) : 2=14.51%, 4=8.21%, 10=16.13%, 20=6.36%, 50=49.20% 00:20:29.416 lat (msec) : 100=3.59%, 250=1.45%, 500=0.01% 00:20:29.416 cpu : usr=99.26%, sys=0.15%, ctx=37, majf=0, minf=5512 00:20:29.416 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:29.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.416 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:29.416 issued rwts: total=65312,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.416 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:29.416 00:20:29.416 Run status group 0 (all jobs): 00:20:29.416 READ: bw=21.0MiB/s (22.0MB/s), 10.5MiB/s-10.6MiB/s (11.0MB/s-11.1MB/s), io=510MiB (535MB), run=24097-24275msec 00:20:29.416 WRITE: bw=23.2MiB/s (24.4MB/s), 11.6MiB/s-12.8MiB/s (12.2MB/s-13.4MB/s), io=512MiB (537MB), run=20032-22036msec 00:20:31.339 ----------------------------------------------------- 00:20:31.339 Suppressions used: 00:20:31.339 count bytes template 00:20:31.339 2 10 /usr/src/fio/parse.c 00:20:31.339 4 384 /usr/src/fio/iolog.c 00:20:31.339 1 8 libtcmalloc_minimal.so 00:20:31.339 1 904 libcrypto.so 00:20:31.339 ----------------------------------------------------- 00:20:31.339 00:20:31.339 14:01:24 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:20:31.339 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:31.339 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:31.339 14:01:24 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:31.339 14:01:24 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:20:31.339 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:31.339 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:31.339 14:01:24 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:31.339 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:31.339 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:31.339 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:31.339 14:01:24 
ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:31.339 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:31.339 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:31.339 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:31.339 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:31.340 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:31.340 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:31.340 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:31.340 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:31.340 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:31.340 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:31.340 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:31.340 14:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:31.340 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:31.340 fio-3.35 00:20:31.340 Starting 1 thread 00:20:49.446 00:20:49.446 test: (groupid=0, jobs=1): err= 0: pid=78586: Wed Dec 11 14:01:39 2024 00:20:49.446 read: IOPS=7563, BW=29.5MiB/s (31.0MB/s)(255MiB/8621msec) 00:20:49.446 slat (nsec): min=3502, max=40896, avg=5489.00, stdev=2223.09 00:20:49.446 clat (usec): min=712, max=33489, avg=16914.88, stdev=1182.61 00:20:49.446 lat (usec): min=722, max=33493, avg=16920.36, stdev=1182.60 00:20:49.446 clat percentiles (usec): 00:20:49.446 | 1.00th=[15664], 5.00th=[15926], 10.00th=[16188], 20.00th=[16319], 00:20:49.446 | 30.00th=[16450], 40.00th=[16581], 50.00th=[16712], 60.00th=[16909], 00:20:49.446 | 70.00th=[17171], 80.00th=[17171], 90.00th=[17695], 95.00th=[17957], 00:20:49.446 | 99.00th=[21103], 99.50th=[23725], 99.90th=[28705], 99.95th=[29492], 00:20:49.446 | 99.99th=[32900] 00:20:49.446 write: IOPS=12.4k, BW=48.3MiB/s (50.7MB/s)(256MiB/5299msec); 0 zone resets 00:20:49.446 slat (usec): min=4, max=617, avg= 7.93, stdev= 6.65 00:20:49.446 clat (usec): min=654, max=59143, avg=10295.73, stdev=12594.73 00:20:49.446 lat (usec): min=673, max=59152, avg=10303.65, stdev=12594.73 00:20:49.446 clat percentiles (usec): 00:20:49.446 | 1.00th=[ 1045], 5.00th=[ 1254], 10.00th=[ 1401], 20.00th=[ 1598], 00:20:49.446 | 30.00th=[ 1778], 40.00th=[ 2114], 50.00th=[ 6325], 60.00th=[ 7701], 00:20:49.446 | 70.00th=[ 9241], 80.00th=[11863], 90.00th=[37487], 95.00th=[39060], 00:20:49.446 | 99.00th=[41157], 99.50th=[41681], 99.90th=[47973], 99.95th=[49021], 00:20:49.446 | 99.99th=[54264] 00:20:49.446 bw ( KiB/s): min=25032, max=67240, per=96.34%, avg=47661.82, stdev=11824.28, samples=11 00:20:49.446 iops : min= 6258, max=16810, avg=11915.64, stdev=2956.02, samples=11 00:20:49.446 lat (usec) : 750=0.01%, 1000=0.31% 00:20:49.446 lat (msec) : 2=18.95%, 4=1.78%, 10=15.86%, 20=53.79%, 50=9.29% 00:20:49.446 lat (msec) : 100=0.02% 00:20:49.446 cpu : usr=98.76%, sys=0.47%, ctx=26, majf=0, minf=5563 
00:20:49.446 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:49.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.446 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:49.446 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:49.446 00:20:49.446 Run status group 0 (all jobs): 00:20:49.446 READ: bw=29.5MiB/s (31.0MB/s), 29.5MiB/s-29.5MiB/s (31.0MB/s-31.0MB/s), io=255MiB (267MB), run=8621-8621msec 00:20:49.446 WRITE: bw=48.3MiB/s (50.7MB/s), 48.3MiB/s-48.3MiB/s (50.7MB/s-50.7MB/s), io=256MiB (268MB), run=5299-5299msec 00:20:49.446 ----------------------------------------------------- 00:20:49.446 Suppressions used: 00:20:49.446 count bytes template 00:20:49.446 1 5 /usr/src/fio/parse.c 00:20:49.446 2 192 /usr/src/fio/iolog.c 00:20:49.446 1 8 libtcmalloc_minimal.so 00:20:49.446 1 904 libcrypto.so 00:20:49.446 ----------------------------------------------------- 00:20:49.446 00:20:49.446 14:01:41 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:20:49.446 14:01:41 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:49.446 14:01:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:49.446 14:01:41 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:49.446 14:01:41 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:20:49.446 Remove shared memory files 00:20:49.446 14:01:41 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:49.446 14:01:41 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:20:49.446 14:01:41 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:20:49.446 14:01:41 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58903 /dev/shm/spdk_tgt_trace.pid76814 00:20:49.446 14:01:41 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:49.446 14:01:41 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:20:49.446 00:20:49.446 real 1m11.777s 00:20:49.446 user 2m37.259s 00:20:49.446 sys 0m3.853s 00:20:49.446 14:01:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.446 14:01:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:49.446 ************************************ 00:20:49.446 END TEST ftl_fio_basic 00:20:49.446 ************************************ 00:20:49.446 14:01:41 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:49.446 14:01:41 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:49.446 14:01:41 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:49.447 14:01:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:49.447 ************************************ 00:20:49.447 START TEST ftl_bdevperf 00:20:49.447 ************************************ 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:49.447 * Looking for test storage... 
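The END TEST / START TEST banners above are printed by the suite's run_test wrapper, which times each test script and propagates its exit status (the real/user/sys lines come from the shell's time keyword). A minimal sketch of that wrapper pattern, assuming only the banner format visible in this log; the body is illustrative, not the suite's exact code:

# Illustrative run_test-style wrapper: banner, time the test script,
# banner again, and return the script's exit status to the caller.
run_test() {
  local name=$1
  shift
  echo '************************************'
  echo "START TEST $name"
  echo '************************************'
  time "$@"
  local rc=$?
  echo '************************************'
  echo "END TEST $name"
  echo '************************************'
  return $rc
}

Invoked as in the log, run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 would produce this kind of START banner, a timed run, and the matching END banner.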
00:20:49.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:49.447 14:01:41 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:49.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.447 --rc genhtml_branch_coverage=1 00:20:49.447 --rc genhtml_function_coverage=1 00:20:49.447 --rc genhtml_legend=1 00:20:49.447 --rc geninfo_all_blocks=1 00:20:49.447 --rc geninfo_unexecuted_blocks=1 00:20:49.447 00:20:49.447 ' 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:49.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.447 --rc genhtml_branch_coverage=1 00:20:49.447 
--rc genhtml_function_coverage=1 00:20:49.447 --rc genhtml_legend=1 00:20:49.447 --rc geninfo_all_blocks=1 00:20:49.447 --rc geninfo_unexecuted_blocks=1 00:20:49.447 00:20:49.447 ' 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:49.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.447 --rc genhtml_branch_coverage=1 00:20:49.447 --rc genhtml_function_coverage=1 00:20:49.447 --rc genhtml_legend=1 00:20:49.447 --rc geninfo_all_blocks=1 00:20:49.447 --rc geninfo_unexecuted_blocks=1 00:20:49.447 00:20:49.447 ' 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:49.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.447 --rc genhtml_branch_coverage=1 00:20:49.447 --rc genhtml_function_coverage=1 00:20:49.447 --rc genhtml_legend=1 00:20:49.447 --rc geninfo_all_blocks=1 00:20:49.447 --rc geninfo_unexecuted_blocks=1 00:20:49.447 00:20:49.447 ' 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78830 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78830 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78830 ']' 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.447 14:01:42 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:49.447 [2024-12-11 14:01:42.145571] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
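waitforlisten above begins polling as soon as bdevperf is launched in paused mode and returns once the process is listening on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern, using the binary path, flags, and socket shown in the log; the polling loop is illustrative rather than the suite's exact implementation, and killprocess is the helper seen earlier in this log:

# Launch bdevperf suspended (-z) against the ftl0 bdev (-T ftl0),
# then poll its JSON-RPC socket until it answers a harmless request.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
bdevperf_pid=$!
trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
for _ in $(seq 1 100); do
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
    break
  fi
  sleep 0.1
done

Once the socket answers, the script can drive the target over RPC, as the bdev_nvme_attach_controller call below does.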
00:20:49.447 [2024-12-11 14:01:42.145893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78830 ] 00:20:49.447 [2024-12-11 14:01:42.328972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.447 [2024-12-11 14:01:42.438418] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.017 14:01:42 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.017 14:01:42 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:20:50.017 14:01:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:50.017 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:20:50.017 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:50.017 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:20:50.017 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:20:50.017 14:01:42 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:50.277 14:01:43 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:50.277 14:01:43 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:20:50.277 14:01:43 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:50.277 14:01:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:50.277 14:01:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:50.277 14:01:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:50.277 14:01:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:50.277 14:01:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:50.536 14:01:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:50.536 { 00:20:50.536 "name": "nvme0n1", 00:20:50.536 "aliases": [ 00:20:50.536 "9c246632-5981-4cc7-b3e4-acf66b1bcfeb" 00:20:50.536 ], 00:20:50.536 "product_name": "NVMe disk", 00:20:50.536 "block_size": 4096, 00:20:50.536 "num_blocks": 1310720, 00:20:50.536 "uuid": "9c246632-5981-4cc7-b3e4-acf66b1bcfeb", 00:20:50.536 "numa_id": -1, 00:20:50.536 "assigned_rate_limits": { 00:20:50.536 "rw_ios_per_sec": 0, 00:20:50.536 "rw_mbytes_per_sec": 0, 00:20:50.536 "r_mbytes_per_sec": 0, 00:20:50.536 "w_mbytes_per_sec": 0 00:20:50.536 }, 00:20:50.536 "claimed": true, 00:20:50.536 "claim_type": "read_many_write_one", 00:20:50.536 "zoned": false, 00:20:50.536 "supported_io_types": { 00:20:50.536 "read": true, 00:20:50.536 "write": true, 00:20:50.536 "unmap": true, 00:20:50.536 "flush": true, 00:20:50.536 "reset": true, 00:20:50.536 "nvme_admin": true, 00:20:50.536 "nvme_io": true, 00:20:50.536 "nvme_io_md": false, 00:20:50.536 "write_zeroes": true, 00:20:50.536 "zcopy": false, 00:20:50.536 "get_zone_info": false, 00:20:50.536 "zone_management": false, 00:20:50.536 "zone_append": false, 00:20:50.536 "compare": true, 00:20:50.536 "compare_and_write": false, 00:20:50.536 "abort": true, 00:20:50.536 "seek_hole": false, 00:20:50.536 "seek_data": false, 00:20:50.536 "copy": true, 00:20:50.536 "nvme_iov_md": false 00:20:50.536 }, 00:20:50.536 "driver_specific": { 00:20:50.536 
"nvme": [ 00:20:50.536 { 00:20:50.536 "pci_address": "0000:00:11.0", 00:20:50.536 "trid": { 00:20:50.536 "trtype": "PCIe", 00:20:50.536 "traddr": "0000:00:11.0" 00:20:50.536 }, 00:20:50.537 "ctrlr_data": { 00:20:50.537 "cntlid": 0, 00:20:50.537 "vendor_id": "0x1b36", 00:20:50.537 "model_number": "QEMU NVMe Ctrl", 00:20:50.537 "serial_number": "12341", 00:20:50.537 "firmware_revision": "8.0.0", 00:20:50.537 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:50.537 "oacs": { 00:20:50.537 "security": 0, 00:20:50.537 "format": 1, 00:20:50.537 "firmware": 0, 00:20:50.537 "ns_manage": 1 00:20:50.537 }, 00:20:50.537 "multi_ctrlr": false, 00:20:50.537 "ana_reporting": false 00:20:50.537 }, 00:20:50.537 "vs": { 00:20:50.537 "nvme_version": "1.4" 00:20:50.537 }, 00:20:50.537 "ns_data": { 00:20:50.537 "id": 1, 00:20:50.537 "can_share": false 00:20:50.537 } 00:20:50.537 } 00:20:50.537 ], 00:20:50.537 "mp_policy": "active_passive" 00:20:50.537 } 00:20:50.537 } 00:20:50.537 ]' 00:20:50.537 14:01:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:50.537 14:01:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:50.537 14:01:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:50.537 14:01:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:50.537 14:01:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:50.537 14:01:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:20:50.537 14:01:43 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:20:50.537 14:01:43 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:50.537 14:01:43 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:20:50.537 14:01:43 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:50.537 14:01:43 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:50.797 14:01:43 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=e236fe0f-2fb9-4519-bd16-95e17a042f46 00:20:50.797 14:01:43 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:20:50.797 14:01:43 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e236fe0f-2fb9-4519-bd16-95e17a042f46 00:20:51.056 14:01:43 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:51.315 14:01:44 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=722668ac-0472-43e7-a58c-5e76d2fde069 00:20:51.315 14:01:44 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 722668ac-0472-43e7-a58c-5e76d2fde069 00:20:51.574 14:01:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=cfe89a7f-2fc2-406e-96d6-136ce411a4ef 00:20:51.574 14:01:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cfe89a7f-2fc2-406e-96d6-136ce411a4ef 00:20:51.574 14:01:44 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:20:51.574 14:01:44 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:51.574 14:01:44 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=cfe89a7f-2fc2-406e-96d6-136ce411a4ef 00:20:51.574 14:01:44 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:20:51.574 14:01:44 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size cfe89a7f-2fc2-406e-96d6-136ce411a4ef 00:20:51.574 14:01:44 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=cfe89a7f-2fc2-406e-96d6-136ce411a4ef 00:20:51.574 14:01:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:51.574 14:01:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:51.574 14:01:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:51.574 14:01:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cfe89a7f-2fc2-406e-96d6-136ce411a4ef 00:20:51.574 14:01:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:51.574 { 00:20:51.574 "name": "cfe89a7f-2fc2-406e-96d6-136ce411a4ef", 00:20:51.574 "aliases": [ 00:20:51.574 "lvs/nvme0n1p0" 00:20:51.574 ], 00:20:51.574 "product_name": "Logical Volume", 00:20:51.574 "block_size": 4096, 00:20:51.574 "num_blocks": 26476544, 00:20:51.574 "uuid": "cfe89a7f-2fc2-406e-96d6-136ce411a4ef", 00:20:51.574 "assigned_rate_limits": { 00:20:51.574 "rw_ios_per_sec": 0, 00:20:51.574 "rw_mbytes_per_sec": 0, 00:20:51.574 "r_mbytes_per_sec": 0, 00:20:51.574 "w_mbytes_per_sec": 0 00:20:51.574 }, 00:20:51.574 "claimed": false, 00:20:51.574 "zoned": false, 00:20:51.574 "supported_io_types": { 00:20:51.574 "read": true, 00:20:51.574 "write": true, 00:20:51.574 "unmap": true, 00:20:51.574 "flush": false, 00:20:51.574 "reset": true, 00:20:51.574 "nvme_admin": false, 00:20:51.574 "nvme_io": false, 00:20:51.574 "nvme_io_md": false, 00:20:51.574 "write_zeroes": true, 00:20:51.574 "zcopy": false, 00:20:51.574 "get_zone_info": false, 00:20:51.574 "zone_management": false, 00:20:51.574 "zone_append": false, 00:20:51.574 "compare": false, 00:20:51.574 "compare_and_write": false, 00:20:51.574 "abort": false, 00:20:51.574 "seek_hole": true, 00:20:51.574 "seek_data": true, 00:20:51.574 "copy": false, 00:20:51.574 "nvme_iov_md": false 00:20:51.574 }, 00:20:51.574 "driver_specific": { 00:20:51.574 "lvol": { 00:20:51.574 "lvol_store_uuid": "722668ac-0472-43e7-a58c-5e76d2fde069", 00:20:51.574 "base_bdev": "nvme0n1", 00:20:51.574 "thin_provision": true, 00:20:51.574 "num_allocated_clusters": 0, 00:20:51.574 "snapshot": false, 00:20:51.574 "clone": false, 00:20:51.574 "esnap_clone": false 00:20:51.574 } 00:20:51.574 } 00:20:51.574 } 00:20:51.574 ]' 00:20:51.574 14:01:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:51.834 14:01:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:51.834 14:01:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:51.834 14:01:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:51.834 14:01:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:51.834 14:01:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:51.834 14:01:44 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:20:51.834 14:01:44 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:20:51.834 14:01:44 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:52.094 14:01:44 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:52.094 14:01:44 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:52.094 14:01:44 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size cfe89a7f-2fc2-406e-96d6-136ce411a4ef 00:20:52.094 14:01:44 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=cfe89a7f-2fc2-406e-96d6-136ce411a4ef 00:20:52.094 14:01:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:52.094 14:01:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:52.094 14:01:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:52.094 14:01:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cfe89a7f-2fc2-406e-96d6-136ce411a4ef 00:20:52.354 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:52.354 { 00:20:52.354 "name": "cfe89a7f-2fc2-406e-96d6-136ce411a4ef", 00:20:52.354 "aliases": [ 00:20:52.354 "lvs/nvme0n1p0" 00:20:52.354 ], 00:20:52.354 "product_name": "Logical Volume", 00:20:52.354 "block_size": 4096, 00:20:52.354 "num_blocks": 26476544, 00:20:52.354 "uuid": "cfe89a7f-2fc2-406e-96d6-136ce411a4ef", 00:20:52.354 "assigned_rate_limits": { 00:20:52.354 "rw_ios_per_sec": 0, 00:20:52.354 "rw_mbytes_per_sec": 0, 00:20:52.354 "r_mbytes_per_sec": 0, 00:20:52.354 "w_mbytes_per_sec": 0 00:20:52.354 }, 00:20:52.354 "claimed": false, 00:20:52.354 "zoned": false, 00:20:52.354 "supported_io_types": { 00:20:52.354 "read": true, 00:20:52.354 "write": true, 00:20:52.354 "unmap": true, 00:20:52.354 "flush": false, 00:20:52.354 "reset": true, 00:20:52.354 "nvme_admin": false, 00:20:52.354 "nvme_io": false, 00:20:52.354 "nvme_io_md": false, 00:20:52.354 "write_zeroes": true, 00:20:52.354 "zcopy": false, 00:20:52.354 "get_zone_info": false, 00:20:52.354 "zone_management": false, 00:20:52.354 "zone_append": false, 00:20:52.354 "compare": false, 00:20:52.354 "compare_and_write": false, 00:20:52.354 "abort": false, 00:20:52.354 "seek_hole": true, 00:20:52.354 "seek_data": true, 00:20:52.354 "copy": false, 00:20:52.354 "nvme_iov_md": false 00:20:52.354 }, 00:20:52.354 "driver_specific": { 00:20:52.354 "lvol": { 00:20:52.354 "lvol_store_uuid": "722668ac-0472-43e7-a58c-5e76d2fde069", 00:20:52.354 "base_bdev": "nvme0n1", 00:20:52.354 "thin_provision": true, 00:20:52.354 "num_allocated_clusters": 0, 00:20:52.354 "snapshot": false, 00:20:52.354 "clone": false, 00:20:52.354 "esnap_clone": false 00:20:52.354 } 00:20:52.354 } 00:20:52.354 } 00:20:52.354 ]' 00:20:52.354 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:52.354 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:52.354 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:52.354 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:52.354 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:52.354 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:52.354 14:01:45 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:20:52.354 14:01:45 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:52.613 14:01:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:20:52.613 14:01:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size cfe89a7f-2fc2-406e-96d6-136ce411a4ef 00:20:52.613 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=cfe89a7f-2fc2-406e-96d6-136ce411a4ef 00:20:52.613 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:52.613 14:01:45 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:20:52.613 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:52.613 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cfe89a7f-2fc2-406e-96d6-136ce411a4ef 00:20:52.873 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:52.873 { 00:20:52.873 "name": "cfe89a7f-2fc2-406e-96d6-136ce411a4ef", 00:20:52.873 "aliases": [ 00:20:52.873 "lvs/nvme0n1p0" 00:20:52.873 ], 00:20:52.873 "product_name": "Logical Volume", 00:20:52.873 "block_size": 4096, 00:20:52.873 "num_blocks": 26476544, 00:20:52.873 "uuid": "cfe89a7f-2fc2-406e-96d6-136ce411a4ef", 00:20:52.873 "assigned_rate_limits": { 00:20:52.873 "rw_ios_per_sec": 0, 00:20:52.873 "rw_mbytes_per_sec": 0, 00:20:52.873 "r_mbytes_per_sec": 0, 00:20:52.873 "w_mbytes_per_sec": 0 00:20:52.873 }, 00:20:52.873 "claimed": false, 00:20:52.873 "zoned": false, 00:20:52.873 "supported_io_types": { 00:20:52.873 "read": true, 00:20:52.873 "write": true, 00:20:52.873 "unmap": true, 00:20:52.873 "flush": false, 00:20:52.873 "reset": true, 00:20:52.873 "nvme_admin": false, 00:20:52.873 "nvme_io": false, 00:20:52.873 "nvme_io_md": false, 00:20:52.873 "write_zeroes": true, 00:20:52.873 "zcopy": false, 00:20:52.873 "get_zone_info": false, 00:20:52.873 "zone_management": false, 00:20:52.873 "zone_append": false, 00:20:52.873 "compare": false, 00:20:52.873 "compare_and_write": false, 00:20:52.874 "abort": false, 00:20:52.874 "seek_hole": true, 00:20:52.874 "seek_data": true, 00:20:52.874 "copy": false, 00:20:52.874 "nvme_iov_md": false 00:20:52.874 }, 00:20:52.874 "driver_specific": { 00:20:52.874 "lvol": { 00:20:52.874 "lvol_store_uuid": "722668ac-0472-43e7-a58c-5e76d2fde069", 00:20:52.874 "base_bdev": "nvme0n1", 00:20:52.874 "thin_provision": true, 00:20:52.874 "num_allocated_clusters": 0, 00:20:52.874 "snapshot": false, 00:20:52.874 "clone": false, 00:20:52.874 "esnap_clone": false 00:20:52.874 } 00:20:52.874 } 00:20:52.874 } 00:20:52.874 ]' 00:20:52.874 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:52.874 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:52.874 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:52.874 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:52.874 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:52.874 14:01:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:52.874 14:01:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:20:52.874 14:01:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cfe89a7f-2fc2-406e-96d6-136ce411a4ef -c nvc0n1p0 --l2p_dram_limit 20 00:20:53.134 [2024-12-11 14:01:46.023793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.134 [2024-12-11 14:01:46.023861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:53.134 [2024-12-11 14:01:46.023895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:53.134 [2024-12-11 14:01:46.023908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.134 [2024-12-11 14:01:46.023970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.134 [2024-12-11 14:01:46.023984] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:53.134 [2024-12-11 14:01:46.023995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:53.134 [2024-12-11 14:01:46.024008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.135 [2024-12-11 14:01:46.024043] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:53.135 [2024-12-11 14:01:46.025001] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:53.135 [2024-12-11 14:01:46.025028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.135 [2024-12-11 14:01:46.025041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:53.135 [2024-12-11 14:01:46.025053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:20:53.135 [2024-12-11 14:01:46.025066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.135 [2024-12-11 14:01:46.025141] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1f2138cb-70a8-49f8-a5b1-8b0171158246 00:20:53.135 [2024-12-11 14:01:46.026538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.135 [2024-12-11 14:01:46.026574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:53.135 [2024-12-11 14:01:46.026593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:53.135 [2024-12-11 14:01:46.026603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.135 [2024-12-11 14:01:46.033921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.135 [2024-12-11 14:01:46.033949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:53.135 [2024-12-11 14:01:46.033964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.287 ms 00:20:53.135 [2024-12-11 14:01:46.033978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.135 [2024-12-11 14:01:46.034083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.135 [2024-12-11 14:01:46.034098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:53.135 [2024-12-11 14:01:46.034116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:53.135 [2024-12-11 14:01:46.034127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.135 [2024-12-11 14:01:46.034195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.135 [2024-12-11 14:01:46.034207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:53.135 [2024-12-11 14:01:46.034220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:53.135 [2024-12-11 14:01:46.034230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.135 [2024-12-11 14:01:46.034257] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:53.135 [2024-12-11 14:01:46.039341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.135 [2024-12-11 14:01:46.039378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:53.135 [2024-12-11 14:01:46.039390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.103 ms 00:20:53.135 [2024-12-11 14:01:46.039424] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.135 [2024-12-11 14:01:46.039468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.135 [2024-12-11 14:01:46.039481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:53.135 [2024-12-11 14:01:46.039492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:53.135 [2024-12-11 14:01:46.039504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.135 [2024-12-11 14:01:46.039537] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:53.135 [2024-12-11 14:01:46.039688] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:53.135 [2024-12-11 14:01:46.039702] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:53.135 [2024-12-11 14:01:46.039718] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:53.135 [2024-12-11 14:01:46.039732] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:53.135 [2024-12-11 14:01:46.039746] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:53.135 [2024-12-11 14:01:46.039757] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:53.135 [2024-12-11 14:01:46.039769] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:53.135 [2024-12-11 14:01:46.039780] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:53.135 [2024-12-11 14:01:46.039794] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:53.135 [2024-12-11 14:01:46.039807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.135 [2024-12-11 14:01:46.039819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:53.135 [2024-12-11 14:01:46.039830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:20:53.135 [2024-12-11 14:01:46.039842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.135 [2024-12-11 14:01:46.039938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.135 [2024-12-11 14:01:46.039953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:53.135 [2024-12-11 14:01:46.039964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:20:53.135 [2024-12-11 14:01:46.039978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.135 [2024-12-11 14:01:46.040058] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:53.135 [2024-12-11 14:01:46.040076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:53.135 [2024-12-11 14:01:46.040086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:53.135 [2024-12-11 14:01:46.040099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.135 [2024-12-11 14:01:46.040110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:53.135 [2024-12-11 14:01:46.040122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:53.135 [2024-12-11 14:01:46.040131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:53.135 
[2024-12-11 14:01:46.040143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:53.135 [2024-12-11 14:01:46.040152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:53.135 [2024-12-11 14:01:46.040164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:53.135 [2024-12-11 14:01:46.040173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:53.135 [2024-12-11 14:01:46.040198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:53.135 [2024-12-11 14:01:46.040207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:53.135 [2024-12-11 14:01:46.040220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:53.135 [2024-12-11 14:01:46.040229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:53.135 [2024-12-11 14:01:46.040243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.135 [2024-12-11 14:01:46.040252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:53.135 [2024-12-11 14:01:46.040264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:53.135 [2024-12-11 14:01:46.040273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.135 [2024-12-11 14:01:46.040285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:53.135 [2024-12-11 14:01:46.040294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:53.135 [2024-12-11 14:01:46.040305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.135 [2024-12-11 14:01:46.040315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:53.135 [2024-12-11 14:01:46.040326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:53.135 [2024-12-11 14:01:46.040335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.135 [2024-12-11 14:01:46.040347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:53.135 [2024-12-11 14:01:46.040356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:53.135 [2024-12-11 14:01:46.040367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.135 [2024-12-11 14:01:46.040376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:53.135 [2024-12-11 14:01:46.040387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:53.135 [2024-12-11 14:01:46.040396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.135 [2024-12-11 14:01:46.040410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:53.135 [2024-12-11 14:01:46.040420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:53.135 [2024-12-11 14:01:46.040432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:53.135 [2024-12-11 14:01:46.040441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:53.135 [2024-12-11 14:01:46.040452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:53.135 [2024-12-11 14:01:46.040462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:53.135 [2024-12-11 14:01:46.040476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:53.135 [2024-12-11 14:01:46.040486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:20:53.135 [2024-12-11 14:01:46.040497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.135 [2024-12-11 14:01:46.040506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:53.135 [2024-12-11 14:01:46.040518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:53.135 [2024-12-11 14:01:46.040527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.135 [2024-12-11 14:01:46.040538] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:53.135 [2024-12-11 14:01:46.040548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:53.135 [2024-12-11 14:01:46.040561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:53.135 [2024-12-11 14:01:46.040571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.135 [2024-12-11 14:01:46.040586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:53.135 [2024-12-11 14:01:46.040595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:53.135 [2024-12-11 14:01:46.040607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:53.135 [2024-12-11 14:01:46.040629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:53.135 [2024-12-11 14:01:46.040641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:53.135 [2024-12-11 14:01:46.040650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:53.135 [2024-12-11 14:01:46.040664] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:53.135 [2024-12-11 14:01:46.040676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:53.135 [2024-12-11 14:01:46.040690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:53.136 [2024-12-11 14:01:46.040701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:53.136 [2024-12-11 14:01:46.040713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:53.136 [2024-12-11 14:01:46.040724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:53.136 [2024-12-11 14:01:46.040737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:53.136 [2024-12-11 14:01:46.040747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:53.136 [2024-12-11 14:01:46.040760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:53.136 [2024-12-11 14:01:46.040770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:53.136 [2024-12-11 14:01:46.040786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:53.136 [2024-12-11 14:01:46.040797] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:53.136 [2024-12-11 14:01:46.040810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:53.136 [2024-12-11 14:01:46.040820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:53.136 [2024-12-11 14:01:46.040843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:53.136 [2024-12-11 14:01:46.040853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:53.136 [2024-12-11 14:01:46.040866] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:53.136 [2024-12-11 14:01:46.040878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:53.136 [2024-12-11 14:01:46.040895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:53.136 [2024-12-11 14:01:46.040905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:53.136 [2024-12-11 14:01:46.040918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:53.136 [2024-12-11 14:01:46.040928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:53.136 [2024-12-11 14:01:46.040942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.136 [2024-12-11 14:01:46.040953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:53.136 [2024-12-11 14:01:46.040966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:20:53.136 [2024-12-11 14:01:46.040976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.136 [2024-12-11 14:01:46.041015] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
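The trace above is bdev_ftl_create doing its first-time startup: it opens the base bdev (the 103424 MiB thin-provisioned lvol) and the cache bdev (the 5171 MiB split nvc0n1p0), writes a fresh superblock, and dumps the on-disk layout. Note the l2p region is exactly 80.00 MiB (20971520 entries x 4 B address size), of which --l2p_dram_limit 20 caps the DRAM-resident portion at 20 MiB (the log later reports 19 of 20 MiB usable). A condensed sketch of the construction performed above, assuming the same rpc.py path and PCIe addresses as this run; the harness itself drives these steps through helpers in ftl/common.sh and ftl/bdevperf.sh:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base side: data NVMe -> lvstore -> 103424 MiB thin-provisioned lvol.
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    lvs=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)
    base=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

    # Cache side: second NVMe -> one 5171 MiB split used as the write buffer.
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1   # produces nvc0n1p0

    # Bind both into ftl0; -t 240 widens the RPC timeout because first-time
    # startup scrubs the NV cache before returning.
    $RPC -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 --l2p_dram_limit 20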
00:20:53.136 [2024-12-11 14:01:46.041027] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:57.330 [2024-12-11 14:01:49.486863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.330 [2024-12-11 14:01:49.486942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:57.330 [2024-12-11 14:01:49.486963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3451.441 ms 00:20:57.330 [2024-12-11 14:01:49.486974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.330 [2024-12-11 14:01:49.524401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.330 [2024-12-11 14:01:49.524449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:57.330 [2024-12-11 14:01:49.524467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.150 ms 00:20:57.330 [2024-12-11 14:01:49.524479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.330 [2024-12-11 14:01:49.524608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.330 [2024-12-11 14:01:49.524621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:57.330 [2024-12-11 14:01:49.524637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:57.330 [2024-12-11 14:01:49.524648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.330 [2024-12-11 14:01:49.581537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.330 [2024-12-11 14:01:49.581599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:57.330 [2024-12-11 14:01:49.581618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.922 ms 00:20:57.330 [2024-12-11 14:01:49.581628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.330 [2024-12-11 14:01:49.581688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.330 [2024-12-11 14:01:49.581699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:57.330 [2024-12-11 14:01:49.581713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:57.330 [2024-12-11 14:01:49.581726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.330 [2024-12-11 14:01:49.582219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.330 [2024-12-11 14:01:49.582235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:57.330 [2024-12-11 14:01:49.582248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:20:57.330 [2024-12-11 14:01:49.582258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.330 [2024-12-11 14:01:49.582365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.330 [2024-12-11 14:01:49.582378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:57.330 [2024-12-11 14:01:49.582394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:57.330 [2024-12-11 14:01:49.582404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.330 [2024-12-11 14:01:49.601442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.330 [2024-12-11 14:01:49.601479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:57.330 [2024-12-11 
14:01:49.601495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.046 ms 00:20:57.330 [2024-12-11 14:01:49.601517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.330 [2024-12-11 14:01:49.614448] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:20:57.330 [2024-12-11 14:01:49.620326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.330 [2024-12-11 14:01:49.620508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:57.330 [2024-12-11 14:01:49.620530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.758 ms 00:20:57.330 [2024-12-11 14:01:49.620544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.330 [2024-12-11 14:01:49.703264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.330 [2024-12-11 14:01:49.703328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:57.330 [2024-12-11 14:01:49.703345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.821 ms 00:20:57.330 [2024-12-11 14:01:49.703358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.330 [2024-12-11 14:01:49.703542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.330 [2024-12-11 14:01:49.703562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:57.330 [2024-12-11 14:01:49.703574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:20:57.330 [2024-12-11 14:01:49.703591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.330 [2024-12-11 14:01:49.740068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.330 [2024-12-11 14:01:49.740112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:57.330 [2024-12-11 14:01:49.740127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.484 ms 00:20:57.331 [2024-12-11 14:01:49.740140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.331 [2024-12-11 14:01:49.775717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.331 [2024-12-11 14:01:49.775759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:57.331 [2024-12-11 14:01:49.775773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.593 ms 00:20:57.331 [2024-12-11 14:01:49.775802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.331 [2024-12-11 14:01:49.776496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.331 [2024-12-11 14:01:49.776520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:57.331 [2024-12-11 14:01:49.776532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:20:57.331 [2024-12-11 14:01:49.776545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.331 [2024-12-11 14:01:49.873744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.331 [2024-12-11 14:01:49.873798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:57.331 [2024-12-11 14:01:49.873813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.304 ms 00:20:57.331 [2024-12-11 14:01:49.873839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.331 [2024-12-11 
14:01:49.911612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.331 [2024-12-11 14:01:49.911800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:57.331 [2024-12-11 14:01:49.911844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.757 ms 00:20:57.331 [2024-12-11 14:01:49.911859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.331 [2024-12-11 14:01:49.948646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.331 [2024-12-11 14:01:49.948689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:57.331 [2024-12-11 14:01:49.948702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.764 ms 00:20:57.331 [2024-12-11 14:01:49.948715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.331 [2024-12-11 14:01:49.984680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.331 [2024-12-11 14:01:49.984859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:57.331 [2024-12-11 14:01:49.984880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.987 ms 00:20:57.331 [2024-12-11 14:01:49.984893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.331 [2024-12-11 14:01:49.984933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.331 [2024-12-11 14:01:49.984950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:57.331 [2024-12-11 14:01:49.984961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:57.331 [2024-12-11 14:01:49.984974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.331 [2024-12-11 14:01:49.985072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.331 [2024-12-11 14:01:49.985088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:57.331 [2024-12-11 14:01:49.985098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:57.331 [2024-12-11 14:01:49.985111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.331 [2024-12-11 14:01:49.986109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3968.301 ms, result 0 00:20:57.331 { 00:20:57.331 "name": "ftl0", 00:20:57.331 "uuid": "1f2138cb-70a8-49f8-a5b1-8b0171158246" 00:20:57.331 } 00:20:57.331 14:01:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:20:57.331 14:01:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:20:57.331 14:01:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:20:57.331 14:01:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:20:57.331 [2024-12-11 14:01:50.306094] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:57.331 I/O size of 69632 is greater than zero copy threshold (65536). 00:20:57.331 Zero copy mechanism will not be used. 00:20:57.331 Running I/O for 4 seconds... 
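With ftl0 up (the RPC returns its name and UUID, sanity-checked here via bdev_ftl_get_stats), the first bdevperf pass is latency-oriented: a single outstanding 69632-byte random write at a time for 4 seconds. 69632 B is 68 KiB, above bdevperf's 65536-byte zero-copy threshold, hence the notice that zero copy is disabled. An annotated restatement of the invocation (bdevperf.py does not start bdevperf; it drives the already-running instance over the application's RPC socket):

    BDEVPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

    # -q 1      queue depth: one outstanding I/O
    # -w        workload pattern (randwrite)
    # -t 4      run time in seconds
    # -o 69632  I/O size in bytes
    $BDEVPERF_PY perform_tests -q 1 -w randwrite -t 4 -o 69632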
00:20:59.273 1519.00 IOPS, 100.87 MiB/s [2024-12-11T14:01:53.695Z] 1542.50 IOPS, 102.43 MiB/s [2024-12-11T14:01:54.632Z] 1578.00 IOPS, 104.79 MiB/s [2024-12-11T14:01:54.632Z] 1602.00 IOPS, 106.38 MiB/s 00:21:01.585 Latency(us) 00:21:01.585 [2024-12-11T14:01:54.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.585 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:21:01.585 ftl0 : 4.00 1601.36 106.34 0.00 0.00 656.55 233.59 4342.75 00:21:01.585 [2024-12-11T14:01:54.632Z] =================================================================================================================== 00:21:01.585 [2024-12-11T14:01:54.632Z] Total : 1601.36 106.34 0.00 0.00 656.55 233.59 4342.75 00:21:01.585 [2024-12-11 14:01:54.311336] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:01.585 { 00:21:01.585 "results": [ 00:21:01.585 { 00:21:01.585 "job": "ftl0", 00:21:01.585 "core_mask": "0x1", 00:21:01.585 "workload": "randwrite", 00:21:01.585 "status": "finished", 00:21:01.585 "queue_depth": 1, 00:21:01.585 "io_size": 69632, 00:21:01.585 "runtime": 4.002227, 00:21:01.585 "iops": 1601.3584436864776, 00:21:01.585 "mibps": 106.34020915105515, 00:21:01.585 "io_failed": 0, 00:21:01.585 "io_timeout": 0, 00:21:01.585 "avg_latency_us": 656.5471883477112, 00:21:01.585 "min_latency_us": 233.5871485943775, 00:21:01.585 "max_latency_us": 4342.746987951808 00:21:01.585 } 00:21:01.585 ], 00:21:01.585 "core_count": 1 00:21:01.585 } 00:21:01.585 14:01:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:21:01.585 [2024-12-11 14:01:54.432586] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:01.585 Running I/O for 4 seconds... 
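The per-interval samples climb from 1519 to 1602 IOPS over the 4-second run, and the summary figures are internally consistent: MiB/s is just IOPS times the 69632-byte I/O size. A quick cross-check with the reported numbers:

    # throughput = IOPS * io_size; values copied from the summary line above
    awk 'BEGIN { printf "%.2f MiB/s\n", 1601.36 * 69632 / (1024 * 1024) }'
    # -> 106.34 MiB/s, matching the MiB/s column

The second pass, just launched above, flips to a throughput profile: 128 outstanding 4096-byte random writes for another 4 seconds.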
00:21:03.460 11401.00 IOPS, 44.54 MiB/s [2024-12-11T14:01:57.442Z] 11089.50 IOPS, 43.32 MiB/s [2024-12-11T14:01:58.819Z] 11000.67 IOPS, 42.97 MiB/s [2024-12-11T14:01:58.819Z] 10707.00 IOPS, 41.82 MiB/s 00:21:05.772 Latency(us) 00:21:05.772 [2024-12-11T14:01:58.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.772 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:21:05.772 ftl0 : 4.02 10694.79 41.78 0.00 0.00 11943.26 230.30 26740.79 00:21:05.772 [2024-12-11T14:01:58.819Z] =================================================================================================================== 00:21:05.772 [2024-12-11T14:01:58.819Z] Total : 10694.79 41.78 0.00 0.00 11943.26 0.00 26740.79 00:21:05.772 { 00:21:05.772 "results": [ 00:21:05.772 { 00:21:05.772 "job": "ftl0", 00:21:05.772 "core_mask": "0x1", 00:21:05.772 "workload": "randwrite", 00:21:05.772 "status": "finished", 00:21:05.772 "queue_depth": 128, 00:21:05.772 "io_size": 4096, 00:21:05.772 "runtime": 4.016256, 00:21:05.772 "iops": 10694.786388118686, 00:21:05.772 "mibps": 41.77650932858862, 00:21:05.772 "io_failed": 0, 00:21:05.772 "io_timeout": 0, 00:21:05.772 "avg_latency_us": 11943.255017452999, 00:21:05.772 "min_latency_us": 230.29718875502007, 00:21:05.772 "max_latency_us": 26740.79357429719 00:21:05.772 } 00:21:05.772 ], 00:21:05.772 "core_count": 1 00:21:05.772 } 00:21:05.772 [2024-12-11 14:01:58.452540] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:05.772 14:01:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:21:05.772 [2024-12-11 14:01:58.572273] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:05.772 Running I/O for 4 seconds... 
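At queue depth 128 the summary obeys Little's law: in-flight I/Os equal IOPS times mean latency, and with the figures above that product lands on the configured depth, which means the queue stayed full for the whole run:

    # Little's law: in-flight I/Os ~= IOPS * mean latency (in seconds);
    # 10694.79 IOPS at 11943.26 us mean latency, from the summary above.
    awk 'BEGIN { printf "%.1f outstanding I/Os\n", 10694.79 * 11943.26e-6 }'
    # -> ~127.7, effectively the configured -q 128

The final pass switches -w to verify, so bdevperf reads back and checks the data it wrote; its results, including the LBA range covered, follow below.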
00:21:07.684 8832.00 IOPS, 34.50 MiB/s [2024-12-11T14:02:01.666Z] 8923.50 IOPS, 34.86 MiB/s [2024-12-11T14:02:02.603Z] 8933.67 IOPS, 34.90 MiB/s [2024-12-11T14:02:02.603Z] 8965.50 IOPS, 35.02 MiB/s 00:21:09.556 Latency(us) 00:21:09.556 [2024-12-11T14:02:02.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.556 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:09.556 Verification LBA range: start 0x0 length 0x1400000 00:21:09.556 ftl0 : 4.01 8976.13 35.06 0.00 0.00 14217.44 250.04 22213.81 00:21:09.556 [2024-12-11T14:02:02.603Z] =================================================================================================================== 00:21:09.556 [2024-12-11T14:02:02.603Z] Total : 8976.13 35.06 0.00 0.00 14217.44 0.00 22213.81 00:21:09.556 { 00:21:09.556 "results": [ 00:21:09.556 { 00:21:09.556 "job": "ftl0", 00:21:09.556 "core_mask": "0x1", 00:21:09.556 "workload": "verify", 00:21:09.556 "status": "finished", 00:21:09.556 "verify_range": { 00:21:09.556 "start": 0, 00:21:09.556 "length": 20971520 00:21:09.556 }, 00:21:09.556 "queue_depth": 128, 00:21:09.556 "io_size": 4096, 00:21:09.556 "runtime": 4.009523, 00:21:09.556 "iops": 8976.13007831605, 00:21:09.556 "mibps": 35.06300811842207, 00:21:09.556 "io_failed": 0, 00:21:09.556 "io_timeout": 0, 00:21:09.556 "avg_latency_us": 14217.441237470028, 00:21:09.556 "min_latency_us": 250.03694779116466, 00:21:09.556 "max_latency_us": 22213.808835341366 00:21:09.556 } 00:21:09.556 ], 00:21:09.556 "core_count": 1 00:21:09.556 } 00:21:09.556 [2024-12-11 14:02:02.594294] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:09.815 14:02:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:21:09.815 [2024-12-11 14:02:02.805459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.815 [2024-12-11 14:02:02.805514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:09.815 [2024-12-11 14:02:02.805530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:09.815 [2024-12-11 14:02:02.805543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.815 [2024-12-11 14:02:02.805566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:09.815 [2024-12-11 14:02:02.809654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.815 [2024-12-11 14:02:02.809681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:09.815 [2024-12-11 14:02:02.809695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.074 ms 00:21:09.815 [2024-12-11 14:02:02.809706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.815 [2024-12-11 14:02:02.811629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.815 [2024-12-11 14:02:02.811665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:09.815 [2024-12-11 14:02:02.811681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.897 ms 00:21:09.815 [2024-12-11 14:02:02.811695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.074 [2024-12-11 14:02:03.024321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.074 [2024-12-11 14:02:03.024368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:21:10.074 [2024-12-11 14:02:03.024392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 212.944 ms 00:21:10.074 [2024-12-11 14:02:03.024415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.074 [2024-12-11 14:02:03.029384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.074 [2024-12-11 14:02:03.029414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:10.074 [2024-12-11 14:02:03.029428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.934 ms 00:21:10.074 [2024-12-11 14:02:03.029441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.074 [2024-12-11 14:02:03.065307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.074 [2024-12-11 14:02:03.065341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:10.074 [2024-12-11 14:02:03.065357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.844 ms 00:21:10.074 [2024-12-11 14:02:03.065367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.074 [2024-12-11 14:02:03.086601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.074 [2024-12-11 14:02:03.086639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:10.074 [2024-12-11 14:02:03.086656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.226 ms 00:21:10.074 [2024-12-11 14:02:03.086666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.074 [2024-12-11 14:02:03.086848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.074 [2024-12-11 14:02:03.086864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:10.074 [2024-12-11 14:02:03.086882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:21:10.074 [2024-12-11 14:02:03.086891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.334 [2024-12-11 14:02:03.122585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.334 [2024-12-11 14:02:03.122617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:10.334 [2024-12-11 14:02:03.122633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.730 ms 00:21:10.334 [2024-12-11 14:02:03.122659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.334 [2024-12-11 14:02:03.159481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.334 [2024-12-11 14:02:03.159513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:10.334 [2024-12-11 14:02:03.159528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.842 ms 00:21:10.334 [2024-12-11 14:02:03.159538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.334 [2024-12-11 14:02:03.194991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.334 [2024-12-11 14:02:03.195022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:10.334 [2024-12-11 14:02:03.195037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.470 ms 00:21:10.334 [2024-12-11 14:02:03.195047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.334 [2024-12-11 14:02:03.229738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.334 [2024-12-11 14:02:03.229769] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:10.334 [2024-12-11 14:02:03.229802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.660 ms 00:21:10.334 [2024-12-11 14:02:03.229811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.334 [2024-12-11 14:02:03.229858] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:10.334 [2024-12-11 14:02:03.229890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.229911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.229922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.229936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.229946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.229959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.229970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.229983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.229993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:21:10.334 [2024-12-11 14:02:03.230180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:10.334 [2024-12-11 14:02:03.230689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.230996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.231006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.231018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.231029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.231042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.231052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.231067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.231077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.231092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.231102] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.231115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.231125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.231138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:10.335 [2024-12-11 14:02:03.231156] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:10.335 [2024-12-11 14:02:03.231168] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f2138cb-70a8-49f8-a5b1-8b0171158246 00:21:10.335 [2024-12-11 14:02:03.231198] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:10.335 [2024-12-11 14:02:03.231211] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:10.335 [2024-12-11 14:02:03.231221] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:10.335 [2024-12-11 14:02:03.231234] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:10.335 [2024-12-11 14:02:03.231243] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:10.335 [2024-12-11 14:02:03.231256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:10.335 [2024-12-11 14:02:03.231266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:10.335 [2024-12-11 14:02:03.231279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:10.335 [2024-12-11 14:02:03.231288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:10.335 [2024-12-11 14:02:03.231301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.335 [2024-12-11 14:02:03.231311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:10.335 [2024-12-11 14:02:03.231324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.447 ms 00:21:10.335 [2024-12-11 14:02:03.231334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.335 [2024-12-11 14:02:03.250889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.335 [2024-12-11 14:02:03.250920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:10.335 [2024-12-11 14:02:03.250950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.516 ms 00:21:10.335 [2024-12-11 14:02:03.250960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.335 [2024-12-11 14:02:03.251453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.335 [2024-12-11 14:02:03.251469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:10.335 [2024-12-11 14:02:03.251483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:21:10.335 [2024-12-11 14:02:03.251492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.335 [2024-12-11 14:02:03.304975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.335 [2024-12-11 14:02:03.305009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:10.335 [2024-12-11 14:02:03.305027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.335 [2024-12-11 14:02:03.305037] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:10.335 [2024-12-11 14:02:03.305093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.335 [2024-12-11 14:02:03.305104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:10.335 [2024-12-11 14:02:03.305117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.335 [2024-12-11 14:02:03.305127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.335 [2024-12-11 14:02:03.305204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.335 [2024-12-11 14:02:03.305217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:10.335 [2024-12-11 14:02:03.305230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.335 [2024-12-11 14:02:03.305239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.335 [2024-12-11 14:02:03.305259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.335 [2024-12-11 14:02:03.305269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:10.335 [2024-12-11 14:02:03.305281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.335 [2024-12-11 14:02:03.305291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-11 14:02:03.423744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.594 [2024-12-11 14:02:03.423798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:10.594 [2024-12-11 14:02:03.423834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.594 [2024-12-11 14:02:03.423851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-11 14:02:03.519618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.594 [2024-12-11 14:02:03.519667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:10.594 [2024-12-11 14:02:03.519699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.594 [2024-12-11 14:02:03.519709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-11 14:02:03.519820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.594 [2024-12-11 14:02:03.519832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:10.594 [2024-12-11 14:02:03.519864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.594 [2024-12-11 14:02:03.519874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-11 14:02:03.519926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.594 [2024-12-11 14:02:03.519954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:10.594 [2024-12-11 14:02:03.519967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.594 [2024-12-11 14:02:03.519977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-11 14:02:03.520096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.594 [2024-12-11 14:02:03.520112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:10.594 [2024-12-11 14:02:03.520128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:21:10.594 [2024-12-11 14:02:03.520138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-11 14:02:03.520187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.594 [2024-12-11 14:02:03.520203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:10.594 [2024-12-11 14:02:03.520217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.594 [2024-12-11 14:02:03.520232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-11 14:02:03.520272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.594 [2024-12-11 14:02:03.520285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:10.594 [2024-12-11 14:02:03.520298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.594 [2024-12-11 14:02:03.520318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-11 14:02:03.520380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:10.594 [2024-12-11 14:02:03.520395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:10.594 [2024-12-11 14:02:03.520408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:10.594 [2024-12-11 14:02:03.520418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.594 [2024-12-11 14:02:03.520592] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 716.217 ms, result 0 00:21:10.594 true 00:21:10.594 14:02:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78830 00:21:10.594 14:02:03 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78830 ']' 00:21:10.594 14:02:03 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78830 00:21:10.594 14:02:03 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:21:10.594 14:02:03 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.594 14:02:03 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78830 00:21:10.594 14:02:03 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:10.594 14:02:03 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:10.594 killing process with pid 78830 00:21:10.594 14:02:03 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78830' 00:21:10.594 Received shutdown signal, test time was about 4.000000 seconds 00:21:10.594 00:21:10.594 Latency(us) 00:21:10.594 [2024-12-11T14:02:03.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.594 [2024-12-11T14:02:03.641Z] =================================================================================================================== 00:21:10.594 [2024-12-11T14:02:03.641Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.594 14:02:03 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78830 00:21:10.594 14:02:03 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78830 00:21:14.786 Remove shared memory files 00:21:14.786 14:02:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:14.786 14:02:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:21:14.786 14:02:07 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:14.786 14:02:07 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:21:14.786 14:02:07 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:21:14.787 14:02:07 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:21:14.787 14:02:07 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:14.787 14:02:07 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:21:14.787 00:21:14.787 real 0m25.316s 00:21:14.787 user 0m27.947s 00:21:14.787 sys 0m1.222s 00:21:14.787 14:02:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:14.787 14:02:07 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:14.787 ************************************ 00:21:14.787 END TEST ftl_bdevperf 00:21:14.787 ************************************ 00:21:14.787 14:02:07 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:14.787 14:02:07 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:14.787 14:02:07 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.787 14:02:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:14.787 ************************************ 00:21:14.787 START TEST ftl_trim 00:21:14.787 ************************************ 00:21:14.787 14:02:07 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:14.787 * Looking for test storage... 00:21:14.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:14.787 14:02:07 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:14.787 14:02:07 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:21:14.787 14:02:07 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:14.787 14:02:07 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:14.787 14:02:07 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:21:14.787 14:02:07 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:14.787 14:02:07 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:14.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.787 --rc genhtml_branch_coverage=1 00:21:14.787 --rc genhtml_function_coverage=1 00:21:14.787 --rc genhtml_legend=1 00:21:14.787 --rc geninfo_all_blocks=1 00:21:14.787 --rc geninfo_unexecuted_blocks=1 00:21:14.787 00:21:14.787 ' 00:21:14.787 14:02:07 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:14.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.787 --rc genhtml_branch_coverage=1 00:21:14.787 --rc genhtml_function_coverage=1 00:21:14.787 --rc genhtml_legend=1 00:21:14.787 --rc geninfo_all_blocks=1 00:21:14.787 --rc geninfo_unexecuted_blocks=1 00:21:14.787 00:21:14.787 ' 00:21:14.787 14:02:07 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:14.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.787 --rc genhtml_branch_coverage=1 00:21:14.787 --rc genhtml_function_coverage=1 00:21:14.787 --rc genhtml_legend=1 00:21:14.787 --rc geninfo_all_blocks=1 00:21:14.787 --rc geninfo_unexecuted_blocks=1 00:21:14.787 00:21:14.787 ' 00:21:14.787 14:02:07 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:14.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.787 --rc genhtml_branch_coverage=1 00:21:14.787 --rc genhtml_function_coverage=1 00:21:14.787 --rc genhtml_legend=1 00:21:14.787 --rc geninfo_all_blocks=1 00:21:14.787 --rc geninfo_unexecuted_blocks=1 00:21:14.787 00:21:14.787 ' 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
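The xtrace just above — "lt 1.15 2" expanding into cmp_versions — is scripts/common.sh checking the installed lcov version against 2: each version string is split on ".", "-", and ":" and the numeric components are compared left to right. A minimal standalone sketch of that comparison, reconstructed from the trace; the name cmp_versions_sketch and the zero-padding of missing components are assumptions here, not the upstream source:

#!/usr/bin/env bash
# Sketch of the component-wise version comparison traced above
# (cmp_versions 1.15 '<' 2): split both versions, compare left to right.
cmp_versions_sketch() {
    local -a ver1 ver2
    local op=$2 v n
    IFS='.-:' read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS='.-:' read -ra ver2 <<< "$3"   # "2"    -> (2)
    n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad missing components with 0 (assumption)
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '=' ]]   # every component equal
}

cmp_versions_sketch 1.15 '<' 2 && echo "lcov 1.15 predates 2"

On the traced inputs this takes the same path as the log: ver1=(1 15) with ver1_l=2, ver2=(2) with ver2_l=1, and the first component already decides 1 < 2, so the function returns 0 and the branch-coverage LCOV_OPTS flags seen in the trace get exported.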
00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:14.787 14:02:07 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=79192 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 79192 00:21:14.787 14:02:07 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:21:14.787 14:02:07 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79192 ']' 00:21:14.787 14:02:07 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.787 14:02:07 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.787 14:02:07 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.787 14:02:07 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.787 14:02:07 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:14.787 [2024-12-11 14:02:07.541492] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:21:14.787 [2024-12-11 14:02:07.541649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79192 ] 00:21:14.787 [2024-12-11 14:02:07.716043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:14.787 [2024-12-11 14:02:07.827425] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.787 [2024-12-11 14:02:07.827555] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.787 [2024-12-11 14:02:07.827590] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.770 14:02:08 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.770 14:02:08 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:15.770 14:02:08 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:15.770 14:02:08 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:21:15.770 14:02:08 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:15.770 14:02:08 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:21:15.770 14:02:08 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:21:15.770 14:02:08 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:16.029 14:02:09 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:16.029 14:02:09 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:21:16.029 14:02:09 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:16.029 14:02:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:16.029 14:02:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:16.029 14:02:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:16.029 14:02:09 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:16.029 14:02:09 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:16.289 14:02:09 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:16.289 { 00:21:16.289 "name": "nvme0n1", 00:21:16.289 "aliases": [ 
00:21:16.289 "1300013c-acc8-4ce3-8e32-e6707b252d5c" 00:21:16.289 ], 00:21:16.289 "product_name": "NVMe disk", 00:21:16.289 "block_size": 4096, 00:21:16.289 "num_blocks": 1310720, 00:21:16.289 "uuid": "1300013c-acc8-4ce3-8e32-e6707b252d5c", 00:21:16.289 "numa_id": -1, 00:21:16.289 "assigned_rate_limits": { 00:21:16.289 "rw_ios_per_sec": 0, 00:21:16.289 "rw_mbytes_per_sec": 0, 00:21:16.289 "r_mbytes_per_sec": 0, 00:21:16.289 "w_mbytes_per_sec": 0 00:21:16.289 }, 00:21:16.289 "claimed": true, 00:21:16.289 "claim_type": "read_many_write_one", 00:21:16.289 "zoned": false, 00:21:16.289 "supported_io_types": { 00:21:16.289 "read": true, 00:21:16.289 "write": true, 00:21:16.289 "unmap": true, 00:21:16.289 "flush": true, 00:21:16.289 "reset": true, 00:21:16.289 "nvme_admin": true, 00:21:16.289 "nvme_io": true, 00:21:16.289 "nvme_io_md": false, 00:21:16.289 "write_zeroes": true, 00:21:16.289 "zcopy": false, 00:21:16.289 "get_zone_info": false, 00:21:16.289 "zone_management": false, 00:21:16.289 "zone_append": false, 00:21:16.289 "compare": true, 00:21:16.289 "compare_and_write": false, 00:21:16.289 "abort": true, 00:21:16.289 "seek_hole": false, 00:21:16.289 "seek_data": false, 00:21:16.289 "copy": true, 00:21:16.289 "nvme_iov_md": false 00:21:16.289 }, 00:21:16.289 "driver_specific": { 00:21:16.289 "nvme": [ 00:21:16.289 { 00:21:16.289 "pci_address": "0000:00:11.0", 00:21:16.289 "trid": { 00:21:16.289 "trtype": "PCIe", 00:21:16.289 "traddr": "0000:00:11.0" 00:21:16.289 }, 00:21:16.290 "ctrlr_data": { 00:21:16.290 "cntlid": 0, 00:21:16.290 "vendor_id": "0x1b36", 00:21:16.290 "model_number": "QEMU NVMe Ctrl", 00:21:16.290 "serial_number": "12341", 00:21:16.290 "firmware_revision": "8.0.0", 00:21:16.290 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:16.290 "oacs": { 00:21:16.290 "security": 0, 00:21:16.290 "format": 1, 00:21:16.290 "firmware": 0, 00:21:16.290 "ns_manage": 1 00:21:16.290 }, 00:21:16.290 "multi_ctrlr": false, 00:21:16.290 "ana_reporting": false 00:21:16.290 }, 00:21:16.290 "vs": { 00:21:16.290 "nvme_version": "1.4" 00:21:16.290 }, 00:21:16.290 "ns_data": { 00:21:16.290 "id": 1, 00:21:16.290 "can_share": false 00:21:16.290 } 00:21:16.290 } 00:21:16.290 ], 00:21:16.290 "mp_policy": "active_passive" 00:21:16.290 } 00:21:16.290 } 00:21:16.290 ]' 00:21:16.290 14:02:09 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:16.290 14:02:09 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:16.290 14:02:09 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:16.290 14:02:09 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:16.290 14:02:09 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:16.290 14:02:09 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:21:16.290 14:02:09 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:21:16.290 14:02:09 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:16.290 14:02:09 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:21:16.290 14:02:09 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:16.290 14:02:09 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:16.549 14:02:09 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=722668ac-0472-43e7-a58c-5e76d2fde069 00:21:16.549 14:02:09 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:21:16.549 14:02:09 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 722668ac-0472-43e7-a58c-5e76d2fde069 00:21:16.808 14:02:09 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:17.066 14:02:09 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=62daa9e1-bd5d-41b4-8276-589c7608d036 00:21:17.066 14:02:09 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 62daa9e1-bd5d-41b4-8276-589c7608d036 00:21:17.325 14:02:10 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=5bebf46c-3543-475c-9367-92d5e40b2c8f 00:21:17.325 14:02:10 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5bebf46c-3543-475c-9367-92d5e40b2c8f 00:21:17.325 14:02:10 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:21:17.325 14:02:10 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:17.325 14:02:10 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=5bebf46c-3543-475c-9367-92d5e40b2c8f 00:21:17.325 14:02:10 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:21:17.325 14:02:10 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 5bebf46c-3543-475c-9367-92d5e40b2c8f 00:21:17.325 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5bebf46c-3543-475c-9367-92d5e40b2c8f 00:21:17.325 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:17.325 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:17.325 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:17.325 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5bebf46c-3543-475c-9367-92d5e40b2c8f 00:21:17.325 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:17.325 { 00:21:17.325 "name": "5bebf46c-3543-475c-9367-92d5e40b2c8f", 00:21:17.325 "aliases": [ 00:21:17.325 "lvs/nvme0n1p0" 00:21:17.325 ], 00:21:17.325 "product_name": "Logical Volume", 00:21:17.325 "block_size": 4096, 00:21:17.325 "num_blocks": 26476544, 00:21:17.325 "uuid": "5bebf46c-3543-475c-9367-92d5e40b2c8f", 00:21:17.325 "assigned_rate_limits": { 00:21:17.325 "rw_ios_per_sec": 0, 00:21:17.325 "rw_mbytes_per_sec": 0, 00:21:17.325 "r_mbytes_per_sec": 0, 00:21:17.326 "w_mbytes_per_sec": 0 00:21:17.326 }, 00:21:17.326 "claimed": false, 00:21:17.326 "zoned": false, 00:21:17.326 "supported_io_types": { 00:21:17.326 "read": true, 00:21:17.326 "write": true, 00:21:17.326 "unmap": true, 00:21:17.326 "flush": false, 00:21:17.326 "reset": true, 00:21:17.326 "nvme_admin": false, 00:21:17.326 "nvme_io": false, 00:21:17.326 "nvme_io_md": false, 00:21:17.326 "write_zeroes": true, 00:21:17.326 "zcopy": false, 00:21:17.326 "get_zone_info": false, 00:21:17.326 "zone_management": false, 00:21:17.326 "zone_append": false, 00:21:17.326 "compare": false, 00:21:17.326 "compare_and_write": false, 00:21:17.326 "abort": false, 00:21:17.326 "seek_hole": true, 00:21:17.326 "seek_data": true, 00:21:17.326 "copy": false, 00:21:17.326 "nvme_iov_md": false 00:21:17.326 }, 00:21:17.326 "driver_specific": { 00:21:17.326 "lvol": { 00:21:17.326 "lvol_store_uuid": "62daa9e1-bd5d-41b4-8276-589c7608d036", 00:21:17.326 "base_bdev": "nvme0n1", 00:21:17.326 "thin_provision": true, 00:21:17.326 "num_allocated_clusters": 0, 00:21:17.326 "snapshot": false, 00:21:17.326 "clone": false, 00:21:17.326 "esnap_clone": false 00:21:17.326 } 00:21:17.326 } 00:21:17.326 } 00:21:17.326 ]' 00:21:17.326 14:02:10 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:17.584 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:17.584 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:17.584 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:17.584 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:17.584 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:17.584 14:02:10 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:21:17.584 14:02:10 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:21:17.584 14:02:10 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:17.843 14:02:10 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:17.843 14:02:10 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:17.843 14:02:10 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 5bebf46c-3543-475c-9367-92d5e40b2c8f 00:21:17.843 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5bebf46c-3543-475c-9367-92d5e40b2c8f 00:21:17.843 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:17.844 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:17.844 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:17.844 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5bebf46c-3543-475c-9367-92d5e40b2c8f 00:21:18.102 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:18.102 { 00:21:18.102 "name": "5bebf46c-3543-475c-9367-92d5e40b2c8f", 00:21:18.102 "aliases": [ 00:21:18.102 "lvs/nvme0n1p0" 00:21:18.102 ], 00:21:18.103 "product_name": "Logical Volume", 00:21:18.103 "block_size": 4096, 00:21:18.103 "num_blocks": 26476544, 00:21:18.103 "uuid": "5bebf46c-3543-475c-9367-92d5e40b2c8f", 00:21:18.103 "assigned_rate_limits": { 00:21:18.103 "rw_ios_per_sec": 0, 00:21:18.103 "rw_mbytes_per_sec": 0, 00:21:18.103 "r_mbytes_per_sec": 0, 00:21:18.103 "w_mbytes_per_sec": 0 00:21:18.103 }, 00:21:18.103 "claimed": false, 00:21:18.103 "zoned": false, 00:21:18.103 "supported_io_types": { 00:21:18.103 "read": true, 00:21:18.103 "write": true, 00:21:18.103 "unmap": true, 00:21:18.103 "flush": false, 00:21:18.103 "reset": true, 00:21:18.103 "nvme_admin": false, 00:21:18.103 "nvme_io": false, 00:21:18.103 "nvme_io_md": false, 00:21:18.103 "write_zeroes": true, 00:21:18.103 "zcopy": false, 00:21:18.103 "get_zone_info": false, 00:21:18.103 "zone_management": false, 00:21:18.103 "zone_append": false, 00:21:18.103 "compare": false, 00:21:18.103 "compare_and_write": false, 00:21:18.103 "abort": false, 00:21:18.103 "seek_hole": true, 00:21:18.103 "seek_data": true, 00:21:18.103 "copy": false, 00:21:18.103 "nvme_iov_md": false 00:21:18.103 }, 00:21:18.103 "driver_specific": { 00:21:18.103 "lvol": { 00:21:18.103 "lvol_store_uuid": "62daa9e1-bd5d-41b4-8276-589c7608d036", 00:21:18.103 "base_bdev": "nvme0n1", 00:21:18.103 "thin_provision": true, 00:21:18.103 "num_allocated_clusters": 0, 00:21:18.103 "snapshot": false, 00:21:18.103 "clone": false, 00:21:18.103 "esnap_clone": false 00:21:18.103 } 00:21:18.103 } 00:21:18.103 } 00:21:18.103 ]' 00:21:18.103 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:18.103 14:02:10 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:21:18.103 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:18.103 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:18.103 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:18.103 14:02:10 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:18.103 14:02:10 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:21:18.103 14:02:10 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:18.361 14:02:11 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:21:18.361 14:02:11 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:21:18.361 14:02:11 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 5bebf46c-3543-475c-9367-92d5e40b2c8f 00:21:18.361 14:02:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5bebf46c-3543-475c-9367-92d5e40b2c8f 00:21:18.361 14:02:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:18.361 14:02:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:18.361 14:02:11 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:18.361 14:02:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5bebf46c-3543-475c-9367-92d5e40b2c8f 00:21:18.620 14:02:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:18.620 { 00:21:18.620 "name": "5bebf46c-3543-475c-9367-92d5e40b2c8f", 00:21:18.620 "aliases": [ 00:21:18.620 "lvs/nvme0n1p0" 00:21:18.620 ], 00:21:18.620 "product_name": "Logical Volume", 00:21:18.620 "block_size": 4096, 00:21:18.620 "num_blocks": 26476544, 00:21:18.620 "uuid": "5bebf46c-3543-475c-9367-92d5e40b2c8f", 00:21:18.620 "assigned_rate_limits": { 00:21:18.620 "rw_ios_per_sec": 0, 00:21:18.620 "rw_mbytes_per_sec": 0, 00:21:18.620 "r_mbytes_per_sec": 0, 00:21:18.620 "w_mbytes_per_sec": 0 00:21:18.620 }, 00:21:18.620 "claimed": false, 00:21:18.620 "zoned": false, 00:21:18.620 "supported_io_types": { 00:21:18.620 "read": true, 00:21:18.620 "write": true, 00:21:18.620 "unmap": true, 00:21:18.620 "flush": false, 00:21:18.620 "reset": true, 00:21:18.620 "nvme_admin": false, 00:21:18.620 "nvme_io": false, 00:21:18.620 "nvme_io_md": false, 00:21:18.620 "write_zeroes": true, 00:21:18.620 "zcopy": false, 00:21:18.620 "get_zone_info": false, 00:21:18.620 "zone_management": false, 00:21:18.620 "zone_append": false, 00:21:18.620 "compare": false, 00:21:18.620 "compare_and_write": false, 00:21:18.620 "abort": false, 00:21:18.620 "seek_hole": true, 00:21:18.620 "seek_data": true, 00:21:18.620 "copy": false, 00:21:18.620 "nvme_iov_md": false 00:21:18.620 }, 00:21:18.620 "driver_specific": { 00:21:18.620 "lvol": { 00:21:18.620 "lvol_store_uuid": "62daa9e1-bd5d-41b4-8276-589c7608d036", 00:21:18.620 "base_bdev": "nvme0n1", 00:21:18.620 "thin_provision": true, 00:21:18.620 "num_allocated_clusters": 0, 00:21:18.620 "snapshot": false, 00:21:18.620 "clone": false, 00:21:18.620 "esnap_clone": false 00:21:18.620 } 00:21:18.620 } 00:21:18.620 } 00:21:18.620 ]' 00:21:18.620 14:02:11 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:18.620 14:02:11 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:18.620 14:02:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:18.620 14:02:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:21:18.620 14:02:11 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:18.620 14:02:11 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:18.620 14:02:11 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:21:18.620 14:02:11 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5bebf46c-3543-475c-9367-92d5e40b2c8f -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:21:18.620 [2024-12-11 14:02:11.657284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.620 [2024-12-11 14:02:11.657334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:18.620 [2024-12-11 14:02:11.657355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:18.620 [2024-12-11 14:02:11.657366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.620 [2024-12-11 14:02:11.660685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.620 [2024-12-11 14:02:11.660722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:18.620 [2024-12-11 14:02:11.660737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.293 ms 00:21:18.620 [2024-12-11 14:02:11.660748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.620 [2024-12-11 14:02:11.660892] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:18.620 [2024-12-11 14:02:11.661861] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:18.620 [2024-12-11 14:02:11.661894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.620 [2024-12-11 14:02:11.661905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:18.620 [2024-12-11 14:02:11.661919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:21:18.621 [2024-12-11 14:02:11.661929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.621 [2024-12-11 14:02:11.662135] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b1fac6d1-37a7-4b75-a43a-f4195852c0c7 00:21:18.621 [2024-12-11 14:02:11.663531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.621 [2024-12-11 14:02:11.663563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:18.621 [2024-12-11 14:02:11.663576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:18.621 [2024-12-11 14:02:11.663589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.881 [2024-12-11 14:02:11.670974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.881 [2024-12-11 14:02:11.671008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:18.881 [2024-12-11 14:02:11.671022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.314 ms 00:21:18.881 [2024-12-11 14:02:11.671035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.881 [2024-12-11 14:02:11.671181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.881 [2024-12-11 14:02:11.671199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:18.881 [2024-12-11 14:02:11.671211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.084 ms 00:21:18.881 [2024-12-11 14:02:11.671228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.881 [2024-12-11 14:02:11.671269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.881 [2024-12-11 14:02:11.671283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:18.881 [2024-12-11 14:02:11.671294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:18.881 [2024-12-11 14:02:11.671310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.881 [2024-12-11 14:02:11.671341] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:18.881 [2024-12-11 14:02:11.676468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.881 [2024-12-11 14:02:11.676501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:18.881 [2024-12-11 14:02:11.676517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.138 ms 00:21:18.881 [2024-12-11 14:02:11.676527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.881 [2024-12-11 14:02:11.676605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.881 [2024-12-11 14:02:11.676634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:18.881 [2024-12-11 14:02:11.676648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:18.881 [2024-12-11 14:02:11.676658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.881 [2024-12-11 14:02:11.676694] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:18.881 [2024-12-11 14:02:11.676851] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:18.881 [2024-12-11 14:02:11.676872] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:18.881 [2024-12-11 14:02:11.676886] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:18.881 [2024-12-11 14:02:11.676901] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:18.881 [2024-12-11 14:02:11.676914] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:18.881 [2024-12-11 14:02:11.676928] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:18.881 [2024-12-11 14:02:11.676938] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:18.881 [2024-12-11 14:02:11.676952] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:18.881 [2024-12-11 14:02:11.676975] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:18.881 [2024-12-11 14:02:11.676988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.881 [2024-12-11 14:02:11.676998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:18.881 [2024-12-11 14:02:11.677011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:21:18.881 [2024-12-11 14:02:11.677022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.881 [2024-12-11 14:02:11.677109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.881 
[2024-12-11 14:02:11.677120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:18.881 [2024-12-11 14:02:11.677133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:18.881 [2024-12-11 14:02:11.677144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.881 [2024-12-11 14:02:11.677272] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:18.881 [2024-12-11 14:02:11.677286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:18.881 [2024-12-11 14:02:11.677299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:18.881 [2024-12-11 14:02:11.677309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.881 [2024-12-11 14:02:11.677322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:18.881 [2024-12-11 14:02:11.677331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:18.881 [2024-12-11 14:02:11.677343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:18.881 [2024-12-11 14:02:11.677353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:18.881 [2024-12-11 14:02:11.677365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:18.881 [2024-12-11 14:02:11.677374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:18.881 [2024-12-11 14:02:11.677388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:18.881 [2024-12-11 14:02:11.677398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:18.881 [2024-12-11 14:02:11.677410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:18.881 [2024-12-11 14:02:11.677419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:18.881 [2024-12-11 14:02:11.677432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:18.881 [2024-12-11 14:02:11.677444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.881 [2024-12-11 14:02:11.677458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:18.881 [2024-12-11 14:02:11.677467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:18.881 [2024-12-11 14:02:11.677479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.881 [2024-12-11 14:02:11.677488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:18.881 [2024-12-11 14:02:11.677500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:18.881 [2024-12-11 14:02:11.677509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.881 [2024-12-11 14:02:11.677521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:18.881 [2024-12-11 14:02:11.677531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:18.881 [2024-12-11 14:02:11.677542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.881 [2024-12-11 14:02:11.677551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:18.881 [2024-12-11 14:02:11.677563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:18.881 [2024-12-11 14:02:11.677571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.881 [2024-12-11 14:02:11.677583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:21:18.881 [2024-12-11 14:02:11.677592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:18.881 [2024-12-11 14:02:11.677604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.881 [2024-12-11 14:02:11.677613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:18.881 [2024-12-11 14:02:11.677627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:18.881 [2024-12-11 14:02:11.677636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:18.881 [2024-12-11 14:02:11.677648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:18.881 [2024-12-11 14:02:11.677657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:18.881 [2024-12-11 14:02:11.677670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:18.881 [2024-12-11 14:02:11.677679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:18.881 [2024-12-11 14:02:11.677691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:18.881 [2024-12-11 14:02:11.677700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.882 [2024-12-11 14:02:11.677711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:18.882 [2024-12-11 14:02:11.677721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:18.882 [2024-12-11 14:02:11.677732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.882 [2024-12-11 14:02:11.677741] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:18.882 [2024-12-11 14:02:11.677754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:18.882 [2024-12-11 14:02:11.677764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:18.882 [2024-12-11 14:02:11.677776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.882 [2024-12-11 14:02:11.677788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:18.882 [2024-12-11 14:02:11.677802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:18.882 [2024-12-11 14:02:11.677811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:18.882 [2024-12-11 14:02:11.677832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:18.882 [2024-12-11 14:02:11.677842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:18.882 [2024-12-11 14:02:11.677855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:18.882 [2024-12-11 14:02:11.677881] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:18.882 [2024-12-11 14:02:11.677902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:18.882 [2024-12-11 14:02:11.677916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:18.882 [2024-12-11 14:02:11.677929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:18.882 [2024-12-11 14:02:11.677940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:21:18.882 [2024-12-11 14:02:11.677953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:18.882 [2024-12-11 14:02:11.677963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:18.882 [2024-12-11 14:02:11.677976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:18.882 [2024-12-11 14:02:11.677986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:18.882 [2024-12-11 14:02:11.678000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:18.882 [2024-12-11 14:02:11.678010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:18.882 [2024-12-11 14:02:11.678025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:18.882 [2024-12-11 14:02:11.678036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:18.882 [2024-12-11 14:02:11.678048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:18.882 [2024-12-11 14:02:11.678058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:18.882 [2024-12-11 14:02:11.678079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:18.882 [2024-12-11 14:02:11.678090] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:18.882 [2024-12-11 14:02:11.678109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:18.882 [2024-12-11 14:02:11.678120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:18.882 [2024-12-11 14:02:11.678134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:18.882 [2024-12-11 14:02:11.678145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:18.882 [2024-12-11 14:02:11.678159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:18.882 [2024-12-11 14:02:11.678171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.882 [2024-12-11 14:02:11.678183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:18.882 [2024-12-11 14:02:11.678194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:21:18.882 [2024-12-11 14:02:11.678207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.882 [2024-12-11 14:02:11.678293] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:21:18.882 [2024-12-11 14:02:11.678311] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:22.171 [2024-12-11 14:02:14.910301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.171 [2024-12-11 14:02:14.910364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:22.171 [2024-12-11 14:02:14.910381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3237.251 ms 00:21:22.171 [2024-12-11 14:02:14.910395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.171 [2024-12-11 14:02:14.949350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.171 [2024-12-11 14:02:14.949404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:22.171 [2024-12-11 14:02:14.949420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.544 ms 00:21:22.171 [2024-12-11 14:02:14.949433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.171 [2024-12-11 14:02:14.949571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.171 [2024-12-11 14:02:14.949587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:22.171 [2024-12-11 14:02:14.949618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:22.171 [2024-12-11 14:02:14.949635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.171 [2024-12-11 14:02:15.010881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.171 [2024-12-11 14:02:15.010928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:22.171 [2024-12-11 14:02:15.010944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.307 ms 00:21:22.171 [2024-12-11 14:02:15.010959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.171 [2024-12-11 14:02:15.011073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.171 [2024-12-11 14:02:15.011091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:22.171 [2024-12-11 14:02:15.011102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:22.171 [2024-12-11 14:02:15.011115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.172 [2024-12-11 14:02:15.011559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.172 [2024-12-11 14:02:15.011583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:22.172 [2024-12-11 14:02:15.011594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:21:22.172 [2024-12-11 14:02:15.011606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.172 [2024-12-11 14:02:15.011720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.172 [2024-12-11 14:02:15.011733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:22.172 [2024-12-11 14:02:15.011758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:21:22.172 [2024-12-11 14:02:15.011774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.172 [2024-12-11 14:02:15.032989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.172 [2024-12-11 14:02:15.033032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:21:22.172 [2024-12-11 14:02:15.033046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.214 ms 00:21:22.172 [2024-12-11 14:02:15.033059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.172 [2024-12-11 14:02:15.045530] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:22.172 [2024-12-11 14:02:15.061945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.172 [2024-12-11 14:02:15.061993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:22.172 [2024-12-11 14:02:15.062011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.811 ms 00:21:22.172 [2024-12-11 14:02:15.062022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.172 [2024-12-11 14:02:15.153933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.172 [2024-12-11 14:02:15.153995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:22.172 [2024-12-11 14:02:15.154015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.944 ms 00:21:22.172 [2024-12-11 14:02:15.154025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.172 [2024-12-11 14:02:15.154297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.172 [2024-12-11 14:02:15.154314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:22.172 [2024-12-11 14:02:15.154331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:21:22.172 [2024-12-11 14:02:15.154342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.172 [2024-12-11 14:02:15.190904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.172 [2024-12-11 14:02:15.190954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:22.172 [2024-12-11 14:02:15.190972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.577 ms 00:21:22.172 [2024-12-11 14:02:15.190982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.430 [2024-12-11 14:02:15.226923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.430 [2024-12-11 14:02:15.226955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:22.430 [2024-12-11 14:02:15.226972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.909 ms 00:21:22.430 [2024-12-11 14:02:15.226983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.430 [2024-12-11 14:02:15.227706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.430 [2024-12-11 14:02:15.227727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:22.430 [2024-12-11 14:02:15.227741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:21:22.430 [2024-12-11 14:02:15.227752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.430 [2024-12-11 14:02:15.333151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.430 [2024-12-11 14:02:15.333194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:22.430 [2024-12-11 14:02:15.333214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.408 ms 00:21:22.430 [2024-12-11 14:02:15.333225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
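Each FTL management step in this startup is logged by trace_step as an Action / name / duration / status quadruple, so the per-step cost can be pulled straight out of the capture. A minimal sketch, assuming the console output has been saved one record per line to build.log (the filename and layout are assumptions about how the capture is stored, not something the test produces):

# Pair each step name (428:trace_step) with the duration record that
# follows it (430:trace_step) and list the slowest steps first.
awk '/428:trace_step/ { sub(/.*name: /, ""); step = $0 }
     /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                        printf "%10.3f ms  %s\n", $0, step }' build.log | sort -rn

On the records above, Scrub NV cache (3237.251 ms) dominates, with Wipe P2L region (105.408 ms) a distant second.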
00:21:22.430 [2024-12-11 14:02:15.370920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.430 [2024-12-11 14:02:15.370959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:22.430 [2024-12-11 14:02:15.370976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.642 ms 00:21:22.430 [2024-12-11 14:02:15.370987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.430 [2024-12-11 14:02:15.408036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.430 [2024-12-11 14:02:15.408073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:22.430 [2024-12-11 14:02:15.408090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.019 ms 00:21:22.430 [2024-12-11 14:02:15.408101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.430 [2024-12-11 14:02:15.445255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.430 [2024-12-11 14:02:15.445312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:22.430 [2024-12-11 14:02:15.445330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.124 ms 00:21:22.430 [2024-12-11 14:02:15.445340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.430 [2024-12-11 14:02:15.445431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.430 [2024-12-11 14:02:15.445446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:22.430 [2024-12-11 14:02:15.445462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:22.430 [2024-12-11 14:02:15.445472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.430 [2024-12-11 14:02:15.445558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.430 [2024-12-11 14:02:15.445569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:22.430 [2024-12-11 14:02:15.445582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:22.430 [2024-12-11 14:02:15.445592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.430 [2024-12-11 14:02:15.446527] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:22.430 [2024-12-11 14:02:15.450752] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3795.126 ms, result 0 00:21:22.430 [2024-12-11 14:02:15.451660] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:22.430 { 00:21:22.430 "name": "ftl0", 00:21:22.430 "uuid": "b1fac6d1-37a7-4b75-a43a-f4195852c0c7" 00:21:22.430 } 00:21:22.690 14:02:15 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:21:22.691 14:02:15 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:22.691 14:02:15 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:22.691 14:02:15 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:21:22.691 14:02:15 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:22.691 14:02:15 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:22.691 14:02:15 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:22.691 14:02:15 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:22.951 [ 00:21:22.951 { 00:21:22.951 "name": "ftl0", 00:21:22.951 "aliases": [ 00:21:22.951 "b1fac6d1-37a7-4b75-a43a-f4195852c0c7" 00:21:22.951 ], 00:21:22.951 "product_name": "FTL disk", 00:21:22.951 "block_size": 4096, 00:21:22.951 "num_blocks": 23592960, 00:21:22.951 "uuid": "b1fac6d1-37a7-4b75-a43a-f4195852c0c7", 00:21:22.951 "assigned_rate_limits": { 00:21:22.951 "rw_ios_per_sec": 0, 00:21:22.951 "rw_mbytes_per_sec": 0, 00:21:22.951 "r_mbytes_per_sec": 0, 00:21:22.951 "w_mbytes_per_sec": 0 00:21:22.951 }, 00:21:22.951 "claimed": false, 00:21:22.951 "zoned": false, 00:21:22.951 "supported_io_types": { 00:21:22.951 "read": true, 00:21:22.951 "write": true, 00:21:22.951 "unmap": true, 00:21:22.951 "flush": true, 00:21:22.951 "reset": false, 00:21:22.951 "nvme_admin": false, 00:21:22.951 "nvme_io": false, 00:21:22.951 "nvme_io_md": false, 00:21:22.951 "write_zeroes": true, 00:21:22.951 "zcopy": false, 00:21:22.951 "get_zone_info": false, 00:21:22.951 "zone_management": false, 00:21:22.951 "zone_append": false, 00:21:22.951 "compare": false, 00:21:22.951 "compare_and_write": false, 00:21:22.951 "abort": false, 00:21:22.951 "seek_hole": false, 00:21:22.951 "seek_data": false, 00:21:22.951 "copy": false, 00:21:22.951 "nvme_iov_md": false 00:21:22.951 }, 00:21:22.951 "driver_specific": { 00:21:22.951 "ftl": { 00:21:22.951 "base_bdev": "5bebf46c-3543-475c-9367-92d5e40b2c8f", 00:21:22.951 "cache": "nvc0n1p0" 00:21:22.951 } 00:21:22.951 } 00:21:22.951 } 00:21:22.951 ] 00:21:22.951 14:02:15 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:21:22.951 14:02:15 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:21:22.951 14:02:15 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:23.213 14:02:16 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:21:23.213 14:02:16 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:21:23.472 14:02:16 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:21:23.472 { 00:21:23.472 "name": "ftl0", 00:21:23.472 "aliases": [ 00:21:23.472 "b1fac6d1-37a7-4b75-a43a-f4195852c0c7" 00:21:23.472 ], 00:21:23.472 "product_name": "FTL disk", 00:21:23.472 "block_size": 4096, 00:21:23.472 "num_blocks": 23592960, 00:21:23.472 "uuid": "b1fac6d1-37a7-4b75-a43a-f4195852c0c7", 00:21:23.472 "assigned_rate_limits": { 00:21:23.472 "rw_ios_per_sec": 0, 00:21:23.472 "rw_mbytes_per_sec": 0, 00:21:23.472 "r_mbytes_per_sec": 0, 00:21:23.472 "w_mbytes_per_sec": 0 00:21:23.472 }, 00:21:23.472 "claimed": false, 00:21:23.472 "zoned": false, 00:21:23.472 "supported_io_types": { 00:21:23.472 "read": true, 00:21:23.472 "write": true, 00:21:23.472 "unmap": true, 00:21:23.472 "flush": true, 00:21:23.472 "reset": false, 00:21:23.472 "nvme_admin": false, 00:21:23.472 "nvme_io": false, 00:21:23.472 "nvme_io_md": false, 00:21:23.472 "write_zeroes": true, 00:21:23.472 "zcopy": false, 00:21:23.472 "get_zone_info": false, 00:21:23.472 "zone_management": false, 00:21:23.472 "zone_append": false, 00:21:23.472 "compare": false, 00:21:23.472 "compare_and_write": false, 00:21:23.472 "abort": false, 00:21:23.472 "seek_hole": false, 00:21:23.472 "seek_data": false, 00:21:23.472 "copy": false, 00:21:23.472 "nvme_iov_md": false 00:21:23.472 }, 00:21:23.472 "driver_specific": { 00:21:23.472 "ftl": { 00:21:23.472 "base_bdev": "5bebf46c-3543-475c-9367-92d5e40b2c8f", 
00:21:23.472 "cache": "nvc0n1p0" 00:21:23.472 } 00:21:23.472 } 00:21:23.472 } 00:21:23.472 ]' 00:21:23.472 14:02:16 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:21:23.472 14:02:16 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:21:23.472 14:02:16 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:23.472 [2024-12-11 14:02:16.506539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.472 [2024-12-11 14:02:16.506590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:23.472 [2024-12-11 14:02:16.506610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:23.472 [2024-12-11 14:02:16.506626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.472 [2024-12-11 14:02:16.506664] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:23.472 [2024-12-11 14:02:16.510995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.472 [2024-12-11 14:02:16.511024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:23.472 [2024-12-11 14:02:16.511043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.317 ms 00:21:23.472 [2024-12-11 14:02:16.511053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.472 [2024-12-11 14:02:16.511602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.473 [2024-12-11 14:02:16.511621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:23.473 [2024-12-11 14:02:16.511635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:21:23.473 [2024-12-11 14:02:16.511646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.473 [2024-12-11 14:02:16.514480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.473 [2024-12-11 14:02:16.514504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:23.473 [2024-12-11 14:02:16.514518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.807 ms 00:21:23.473 [2024-12-11 14:02:16.514528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.733 [2024-12-11 14:02:16.520187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.733 [2024-12-11 14:02:16.520216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:23.733 [2024-12-11 14:02:16.520230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.632 ms 00:21:23.733 [2024-12-11 14:02:16.520240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.733 [2024-12-11 14:02:16.557341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.733 [2024-12-11 14:02:16.557376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:23.733 [2024-12-11 14:02:16.557395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.058 ms 00:21:23.733 [2024-12-11 14:02:16.557405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.733 [2024-12-11 14:02:16.578798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.733 [2024-12-11 14:02:16.578839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:23.733 [2024-12-11 14:02:16.578873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 21.343 ms 00:21:23.733 [2024-12-11 14:02:16.578886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.733 [2024-12-11 14:02:16.579115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.733 [2024-12-11 14:02:16.579129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:23.733 [2024-12-11 14:02:16.579142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:21:23.733 [2024-12-11 14:02:16.579152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.733 [2024-12-11 14:02:16.614396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.733 [2024-12-11 14:02:16.614429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:23.733 [2024-12-11 14:02:16.614445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.264 ms 00:21:23.733 [2024-12-11 14:02:16.614454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.733 [2024-12-11 14:02:16.650555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.733 [2024-12-11 14:02:16.650587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:23.733 [2024-12-11 14:02:16.650605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.057 ms 00:21:23.733 [2024-12-11 14:02:16.650615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.733 [2024-12-11 14:02:16.686639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.733 [2024-12-11 14:02:16.686670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:23.733 [2024-12-11 14:02:16.686685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.994 ms 00:21:23.733 [2024-12-11 14:02:16.686694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.733 [2024-12-11 14:02:16.722333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.733 [2024-12-11 14:02:16.722363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:23.733 [2024-12-11 14:02:16.722378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.561 ms 00:21:23.733 [2024-12-11 14:02:16.722388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.733 [2024-12-11 14:02:16.722486] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:23.733 [2024-12-11 14:02:16.722504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722596] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:23.733 [2024-12-11 14:02:16.722923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 
[2024-12-11 14:02:16.722934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.722947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.722957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.722970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.722981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.722996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:21:23.734 [2024-12-11 14:02:16.723236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:23.734 [2024-12-11 14:02:16.723756] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:23.734 [2024-12-11 14:02:16.723771] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1fac6d1-37a7-4b75-a43a-f4195852c0c7 00:21:23.734 [2024-12-11 14:02:16.723782] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:23.734 [2024-12-11 14:02:16.723794] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:23.734 [2024-12-11 14:02:16.723804] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:23.734 [2024-12-11 14:02:16.723819] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:23.734 [2024-12-11 14:02:16.723837] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:23.734 [2024-12-11 14:02:16.723849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
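The ftl_dev_dump_bands output is one record per band, and after a clean unload every band should show 0 valid blocks in state free, as all 100 do here. Rather than eyeballing the dump, the counts can be tallied mechanically; a minimal sketch against the same assumed build.log capture:

# Tally bands by state and sum the valid-block counts from the dump.
grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' build.log \
  | awk '{ valid += $3; states[$NF]++ }
         END { for (s in states) printf "%-6s %d bands\n", s, states[s]
               printf "total valid blocks: %d\n", valid }'

That yields 100 free bands and 0 valid blocks, consistent with the total valid LBAs: 0, user writes: 0, and WAF: inf statistics just above.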
00:21:23.734 [2024-12-11 14:02:16.723859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:23.734 [2024-12-11 14:02:16.723870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:23.734 [2024-12-11 14:02:16.723880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:23.734 [2024-12-11 14:02:16.723892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.734 [2024-12-11 14:02:16.723902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:23.734 [2024-12-11 14:02:16.723915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.409 ms 00:21:23.734 [2024-12-11 14:02:16.723925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.734 [2024-12-11 14:02:16.743677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.734 [2024-12-11 14:02:16.743706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:23.734 [2024-12-11 14:02:16.743724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.745 ms 00:21:23.734 [2024-12-11 14:02:16.743734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.734 [2024-12-11 14:02:16.744337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.734 [2024-12-11 14:02:16.744355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:23.734 [2024-12-11 14:02:16.744369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:21:23.734 [2024-12-11 14:02:16.744379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.994 [2024-12-11 14:02:16.813172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.994 [2024-12-11 14:02:16.813211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:23.994 [2024-12-11 14:02:16.813228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.994 [2024-12-11 14:02:16.813239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.994 [2024-12-11 14:02:16.813343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.994 [2024-12-11 14:02:16.813356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:23.994 [2024-12-11 14:02:16.813369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.994 [2024-12-11 14:02:16.813379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.994 [2024-12-11 14:02:16.813455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.994 [2024-12-11 14:02:16.813468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:23.994 [2024-12-11 14:02:16.813487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.994 [2024-12-11 14:02:16.813497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.994 [2024-12-11 14:02:16.813531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.994 [2024-12-11 14:02:16.813542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:23.994 [2024-12-11 14:02:16.813554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.994 [2024-12-11 14:02:16.813564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.994 [2024-12-11 14:02:16.942912] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.994 [2024-12-11 14:02:16.942969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:23.994 [2024-12-11 14:02:16.943002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.994 [2024-12-11 14:02:16.943013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.254 [2024-12-11 14:02:17.043323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.254 [2024-12-11 14:02:17.043376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:24.254 [2024-12-11 14:02:17.043393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.254 [2024-12-11 14:02:17.043404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.254 [2024-12-11 14:02:17.043540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.254 [2024-12-11 14:02:17.043554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:24.254 [2024-12-11 14:02:17.043571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.254 [2024-12-11 14:02:17.043584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.254 [2024-12-11 14:02:17.043649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.254 [2024-12-11 14:02:17.043660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:24.254 [2024-12-11 14:02:17.043672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.254 [2024-12-11 14:02:17.043682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.254 [2024-12-11 14:02:17.043850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.254 [2024-12-11 14:02:17.043865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:24.254 [2024-12-11 14:02:17.043878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.254 [2024-12-11 14:02:17.043891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.254 [2024-12-11 14:02:17.043954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.254 [2024-12-11 14:02:17.043967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:24.254 [2024-12-11 14:02:17.043980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.254 [2024-12-11 14:02:17.043990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.254 [2024-12-11 14:02:17.044046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.254 [2024-12-11 14:02:17.044058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:24.254 [2024-12-11 14:02:17.044073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.254 [2024-12-11 14:02:17.044083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.254 [2024-12-11 14:02:17.044148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.254 [2024-12-11 14:02:17.044160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:24.254 [2024-12-11 14:02:17.044174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.254 [2024-12-11 14:02:17.044184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:24.254 [2024-12-11 14:02:17.044379] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 538.686 ms, result 0 00:21:24.254 true 00:21:24.254 14:02:17 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 79192 00:21:24.254 14:02:17 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79192 ']' 00:21:24.254 14:02:17 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79192 00:21:24.254 14:02:17 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:24.254 14:02:17 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.254 14:02:17 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79192 00:21:24.254 14:02:17 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:24.254 14:02:17 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:24.254 killing process with pid 79192 00:21:24.254 14:02:17 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79192' 00:21:24.254 14:02:17 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79192 00:21:24.254 14:02:17 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79192 00:21:29.527 14:02:21 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:21:30.104 65536+0 records in 00:21:30.104 65536+0 records out 00:21:30.104 268435456 bytes (268 MB, 256 MiB) copied, 1.02563 s, 262 MB/s 00:21:30.104 14:02:22 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:30.104 [2024-12-11 14:02:23.083714] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
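Between the 'FTL shutdown' above and the startup that follows, trim.sh stages its test data: dd reads 65536 4 KiB blocks (256 MiB) of random data from /dev/urandom, and spdk_dd then replays that pattern onto the ftl0 bdev, bringing the bdev stack up from the ftl.json that save_subsystem_config wrote earlier. The same two steps in isolation; a minimal sketch in which /tmp/random_pattern and $SPDK_REPO are placeholders for the paths the test actually uses:

# Stage 256 MiB of random test data as 65536 x 4 KiB blocks.
dd if=/dev/urandom of=/tmp/random_pattern bs=4K count=65536

# Replay the pattern onto ftl0; spdk_dd constructs the bdevs from the
# subsystem config saved earlier with save_subsystem_config.
"$SPDK_REPO"/build/bin/spdk_dd --if=/tmp/random_pattern --ob=ftl0 \
    --json="$SPDK_REPO"/test/ftl/config/ftl.json

dd's own summary checks out: 268435456 bytes / 1.02563 s is roughly 262 MB/s.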
00:21:30.104 [2024-12-11 14:02:23.083849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79397 ] 00:21:30.392 [2024-12-11 14:02:23.263593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.392 [2024-12-11 14:02:23.373644] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.976 [2024-12-11 14:02:23.737676] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:30.976 [2024-12-11 14:02:23.737761] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:30.976 [2024-12-11 14:02:23.900327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.976 [2024-12-11 14:02:23.900375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:30.976 [2024-12-11 14:02:23.900391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:30.976 [2024-12-11 14:02:23.900402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.976 [2024-12-11 14:02:23.903522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.976 [2024-12-11 14:02:23.903708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:30.976 [2024-12-11 14:02:23.903731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.105 ms 00:21:30.976 [2024-12-11 14:02:23.903742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.976 [2024-12-11 14:02:23.903861] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:30.976 [2024-12-11 14:02:23.904901] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:30.976 [2024-12-11 14:02:23.904940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.976 [2024-12-11 14:02:23.904951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:30.976 [2024-12-11 14:02:23.904962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:21:30.976 [2024-12-11 14:02:23.904972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.976 [2024-12-11 14:02:23.906439] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:30.976 [2024-12-11 14:02:23.925491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.976 [2024-12-11 14:02:23.925638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:30.976 [2024-12-11 14:02:23.925660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.084 ms 00:21:30.976 [2024-12-11 14:02:23.925671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.976 [2024-12-11 14:02:23.925770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.976 [2024-12-11 14:02:23.925784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:30.976 [2024-12-11 14:02:23.925795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:30.976 [2024-12-11 14:02:23.925805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.976 [2024-12-11 14:02:23.932457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:30.976 [2024-12-11 14:02:23.932594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:30.976 [2024-12-11 14:02:23.932615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.600 ms 00:21:30.976 [2024-12-11 14:02:23.932626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.976 [2024-12-11 14:02:23.932730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.976 [2024-12-11 14:02:23.932744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:30.976 [2024-12-11 14:02:23.932756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:30.976 [2024-12-11 14:02:23.932767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.976 [2024-12-11 14:02:23.932798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.976 [2024-12-11 14:02:23.932809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:30.976 [2024-12-11 14:02:23.932819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:30.976 [2024-12-11 14:02:23.932840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.976 [2024-12-11 14:02:23.932863] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:30.976 [2024-12-11 14:02:23.937586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.976 [2024-12-11 14:02:23.937617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:30.976 [2024-12-11 14:02:23.937629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.737 ms 00:21:30.976 [2024-12-11 14:02:23.937639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.976 [2024-12-11 14:02:23.937708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.976 [2024-12-11 14:02:23.937721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:30.976 [2024-12-11 14:02:23.937733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:30.976 [2024-12-11 14:02:23.937743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.976 [2024-12-11 14:02:23.937770] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:30.976 [2024-12-11 14:02:23.937792] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:30.976 [2024-12-11 14:02:23.937840] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:30.976 [2024-12-11 14:02:23.937868] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:30.976 [2024-12-11 14:02:23.937956] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:30.976 [2024-12-11 14:02:23.937969] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:30.976 [2024-12-11 14:02:23.937982] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:30.976 [2024-12-11 14:02:23.938000] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:30.976 [2024-12-11 14:02:23.938012] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:30.976 [2024-12-11 14:02:23.938024] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:30.976 [2024-12-11 14:02:23.938034] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:30.976 [2024-12-11 14:02:23.938044] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:30.976 [2024-12-11 14:02:23.938055] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:30.976 [2024-12-11 14:02:23.938065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.976 [2024-12-11 14:02:23.938083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:30.976 [2024-12-11 14:02:23.938093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:21:30.976 [2024-12-11 14:02:23.938103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.976 [2024-12-11 14:02:23.938179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.976 [2024-12-11 14:02:23.938194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:30.976 [2024-12-11 14:02:23.938204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:30.976 [2024-12-11 14:02:23.938213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.976 [2024-12-11 14:02:23.938299] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:30.976 [2024-12-11 14:02:23.938313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:30.976 [2024-12-11 14:02:23.938323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:30.976 [2024-12-11 14:02:23.938333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.976 [2024-12-11 14:02:23.938344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:30.976 [2024-12-11 14:02:23.938353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:30.976 [2024-12-11 14:02:23.938362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:30.976 [2024-12-11 14:02:23.938372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:30.976 [2024-12-11 14:02:23.938381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:30.976 [2024-12-11 14:02:23.938391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:30.976 [2024-12-11 14:02:23.938401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:30.976 [2024-12-11 14:02:23.938420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:30.976 [2024-12-11 14:02:23.938430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:30.976 [2024-12-11 14:02:23.938440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:30.977 [2024-12-11 14:02:23.938450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:30.977 [2024-12-11 14:02:23.938459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.977 [2024-12-11 14:02:23.938468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:30.977 [2024-12-11 14:02:23.938477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:30.977 [2024-12-11 14:02:23.938486] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.977 [2024-12-11 14:02:23.938496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:30.977 [2024-12-11 14:02:23.938506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:30.977 [2024-12-11 14:02:23.938515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:30.977 [2024-12-11 14:02:23.938524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:30.977 [2024-12-11 14:02:23.938533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:30.977 [2024-12-11 14:02:23.938542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:30.977 [2024-12-11 14:02:23.938551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:30.977 [2024-12-11 14:02:23.938561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:30.977 [2024-12-11 14:02:23.938570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:30.977 [2024-12-11 14:02:23.938579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:30.977 [2024-12-11 14:02:23.938588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:30.977 [2024-12-11 14:02:23.938597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:30.977 [2024-12-11 14:02:23.938607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:30.977 [2024-12-11 14:02:23.938616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:30.977 [2024-12-11 14:02:23.938625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:30.977 [2024-12-11 14:02:23.938633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:30.977 [2024-12-11 14:02:23.938643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:30.977 [2024-12-11 14:02:23.938651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:30.977 [2024-12-11 14:02:23.938660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:30.977 [2024-12-11 14:02:23.938669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:30.977 [2024-12-11 14:02:23.938678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.977 [2024-12-11 14:02:23.938687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:30.977 [2024-12-11 14:02:23.938697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:30.977 [2024-12-11 14:02:23.938706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.977 [2024-12-11 14:02:23.938714] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:30.977 [2024-12-11 14:02:23.938724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:30.977 [2024-12-11 14:02:23.938737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:30.977 [2024-12-11 14:02:23.938747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.977 [2024-12-11 14:02:23.938756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:30.977 [2024-12-11 14:02:23.938766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:30.977 [2024-12-11 14:02:23.938775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:30.977 
[2024-12-11 14:02:23.938785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:30.977 [2024-12-11 14:02:23.938794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:30.977 [2024-12-11 14:02:23.938803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:30.977 [2024-12-11 14:02:23.938814] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:30.977 [2024-12-11 14:02:23.938842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:30.977 [2024-12-11 14:02:23.938854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:30.977 [2024-12-11 14:02:23.938864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:30.977 [2024-12-11 14:02:23.938875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:30.977 [2024-12-11 14:02:23.938885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:30.977 [2024-12-11 14:02:23.938896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:30.977 [2024-12-11 14:02:23.938906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:30.977 [2024-12-11 14:02:23.938916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:30.977 [2024-12-11 14:02:23.938926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:30.977 [2024-12-11 14:02:23.938936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:30.977 [2024-12-11 14:02:23.938947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:30.977 [2024-12-11 14:02:23.938958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:30.977 [2024-12-11 14:02:23.938968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:30.977 [2024-12-11 14:02:23.938978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:30.977 [2024-12-11 14:02:23.938987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:30.977 [2024-12-11 14:02:23.938998] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:30.977 [2024-12-11 14:02:23.939008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:30.977 [2024-12-11 14:02:23.939019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:30.977 [2024-12-11 14:02:23.939030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:30.977 [2024-12-11 14:02:23.939043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:30.977 [2024-12-11 14:02:23.939054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:30.977 [2024-12-11 14:02:23.939065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.977 [2024-12-11 14:02:23.939079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:30.977 [2024-12-11 14:02:23.939089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:21:30.977 [2024-12-11 14:02:23.939098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.977 [2024-12-11 14:02:23.977286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.977 [2024-12-11 14:02:23.977440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:30.977 [2024-12-11 14:02:23.977462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.187 ms 00:21:30.977 [2024-12-11 14:02:23.977473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.977 [2024-12-11 14:02:23.977597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.977 [2024-12-11 14:02:23.977610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:30.977 [2024-12-11 14:02:23.977621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:30.977 [2024-12-11 14:02:23.977631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.237 [2024-12-11 14:02:24.046652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.237 [2024-12-11 14:02:24.046687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:31.237 [2024-12-11 14:02:24.046704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.110 ms 00:21:31.237 [2024-12-11 14:02:24.046715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.237 [2024-12-11 14:02:24.046800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.237 [2024-12-11 14:02:24.046814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:31.237 [2024-12-11 14:02:24.046840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:31.237 [2024-12-11 14:02:24.046851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.237 [2024-12-11 14:02:24.047286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.237 [2024-12-11 14:02:24.047300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:31.237 [2024-12-11 14:02:24.047311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:21:31.237 [2024-12-11 14:02:24.047325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.237 [2024-12-11 14:02:24.047440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.237 [2024-12-11 14:02:24.047459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:31.237 [2024-12-11 14:02:24.047469] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:21:31.237 [2024-12-11 14:02:24.047479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.237 [2024-12-11 14:02:24.067063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.237 [2024-12-11 14:02:24.067233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:31.237 [2024-12-11 14:02:24.067254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.592 ms 00:21:31.237 [2024-12-11 14:02:24.067265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.237 [2024-12-11 14:02:24.086545] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:31.237 [2024-12-11 14:02:24.086587] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:31.237 [2024-12-11 14:02:24.086602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.237 [2024-12-11 14:02:24.086613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:31.237 [2024-12-11 14:02:24.086624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.259 ms 00:21:31.237 [2024-12-11 14:02:24.086633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.237 [2024-12-11 14:02:24.116365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.237 [2024-12-11 14:02:24.116407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:31.237 [2024-12-11 14:02:24.116422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.701 ms 00:21:31.237 [2024-12-11 14:02:24.116433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.237 [2024-12-11 14:02:24.134784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.237 [2024-12-11 14:02:24.134835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:31.237 [2024-12-11 14:02:24.134849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.298 ms 00:21:31.237 [2024-12-11 14:02:24.134859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.237 [2024-12-11 14:02:24.152832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.237 [2024-12-11 14:02:24.152869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:31.237 [2024-12-11 14:02:24.152884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.927 ms 00:21:31.237 [2024-12-11 14:02:24.152894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.237 [2024-12-11 14:02:24.153677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.237 [2024-12-11 14:02:24.153700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:31.237 [2024-12-11 14:02:24.153712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:21:31.237 [2024-12-11 14:02:24.153722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.237 [2024-12-11 14:02:24.239244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.237 [2024-12-11 14:02:24.239296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:31.237 [2024-12-11 14:02:24.239313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.632 ms 00:21:31.237 [2024-12-11 14:02:24.239324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.237 [2024-12-11 14:02:24.250333] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:31.237 [2024-12-11 14:02:24.266646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.237 [2024-12-11 14:02:24.266693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:31.237 [2024-12-11 14:02:24.266708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.252 ms 00:21:31.237 [2024-12-11 14:02:24.266720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.237 [2024-12-11 14:02:24.266880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.237 [2024-12-11 14:02:24.266896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:31.237 [2024-12-11 14:02:24.266908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:31.237 [2024-12-11 14:02:24.266918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.237 [2024-12-11 14:02:24.266972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.237 [2024-12-11 14:02:24.266985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:31.237 [2024-12-11 14:02:24.266996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:31.237 [2024-12-11 14:02:24.267006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.237 [2024-12-11 14:02:24.267040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.237 [2024-12-11 14:02:24.267058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:31.237 [2024-12-11 14:02:24.267069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:31.237 [2024-12-11 14:02:24.267079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.237 [2024-12-11 14:02:24.267115] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:31.237 [2024-12-11 14:02:24.267143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.237 [2024-12-11 14:02:24.267154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:31.237 [2024-12-11 14:02:24.267165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:31.237 [2024-12-11 14:02:24.267175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.497 [2024-12-11 14:02:24.304041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.497 [2024-12-11 14:02:24.304219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:31.497 [2024-12-11 14:02:24.304242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.903 ms 00:21:31.497 [2024-12-11 14:02:24.304253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.497 [2024-12-11 14:02:24.304419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.497 [2024-12-11 14:02:24.304434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:31.497 [2024-12-11 14:02:24.304447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:31.497 [2024-12-11 14:02:24.304457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:31.497 [2024-12-11 14:02:24.305364] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:31.497 [2024-12-11 14:02:24.309484] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 405.400 ms, result 0 00:21:31.497 [2024-12-11 14:02:24.310257] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:31.497 [2024-12-11 14:02:24.328702] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:32.433 [2024-12-11T14:02:35.361Z] Copying: 256/256 [MB] (average 23 MBps) [2024-12-11 14:02:35.156615] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:42.314 [2024-12-11 14:02:35.171483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.314 [2024-12-11 14:02:35.171636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:42.314 [2024-12-11 14:02:35.171660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:42.314 [2024-12-11 14:02:35.171671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.314 [2024-12-11 14:02:35.171710] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:42.314 [2024-12-11 14:02:35.175924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.314 [2024-12-11 14:02:35.175955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:42.314 [2024-12-11 14:02:35.175983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.203 ms 00:21:42.314 [2024-12-11 14:02:35.175994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.314 [2024-12-11 14:02:35.177910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.314 [2024-12-11 14:02:35.178032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:42.314 [2024-12-11 14:02:35.178052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.894 ms 00:21:42.314 [2024-12-11 14:02:35.178062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.314 [2024-12-11 14:02:35.184836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.314 [2024-12-11 14:02:35.184886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:42.314 [2024-12-11 14:02:35.184898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.751 ms 00:21:42.314 [2024-12-11 14:02:35.184908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.314 [2024-12-11 14:02:35.190532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.314 [2024-12-11 14:02:35.190566] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:42.314 [2024-12-11 14:02:35.190578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.596 ms 00:21:42.314 [2024-12-11 14:02:35.190588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.314 [2024-12-11 14:02:35.227342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.314 [2024-12-11 14:02:35.227490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:42.314 [2024-12-11 14:02:35.227510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.777 ms 00:21:42.314 [2024-12-11 14:02:35.227521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.314 [2024-12-11 14:02:35.249023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.314 [2024-12-11 14:02:35.249065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:42.314 [2024-12-11 14:02:35.249082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.466 ms 00:21:42.314 [2024-12-11 14:02:35.249108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.314 [2024-12-11 14:02:35.249265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.314 [2024-12-11 14:02:35.249279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:42.314 [2024-12-11 14:02:35.249290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:21:42.314 [2024-12-11 14:02:35.249311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.314 [2024-12-11 14:02:35.286415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.314 [2024-12-11 14:02:35.286558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:42.314 [2024-12-11 14:02:35.286635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.146 ms 00:21:42.314 [2024-12-11 14:02:35.286669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.314 [2024-12-11 14:02:35.323088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.314 [2024-12-11 14:02:35.323229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:42.314 [2024-12-11 14:02:35.323302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.406 ms 00:21:42.314 [2024-12-11 14:02:35.323336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.314 [2024-12-11 14:02:35.359210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.314 [2024-12-11 14:02:35.359346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:42.314 [2024-12-11 14:02:35.359417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.854 ms 00:21:42.314 [2024-12-11 14:02:35.359451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.574 [2024-12-11 14:02:35.395990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.574 [2024-12-11 14:02:35.396140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:42.574 [2024-12-11 14:02:35.396244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.486 ms 00:21:42.574 [2024-12-11 14:02:35.396260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.574 [2024-12-11 14:02:35.396361] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:21:42.574 [2024-12-11 14:02:35.396379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:42.574 [2024-12-11 14:02:35.396697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.396998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397178] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397446] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:42.575 [2024-12-11 14:02:35.397464] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:42.575 [2024-12-11 14:02:35.397475] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1fac6d1-37a7-4b75-a43a-f4195852c0c7 00:21:42.575 [2024-12-11 14:02:35.397486] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:42.575 [2024-12-11 14:02:35.397495] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:42.575 [2024-12-11 14:02:35.397504] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:42.575 [2024-12-11 14:02:35.397514] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:42.575 [2024-12-11 14:02:35.397523] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:42.575 [2024-12-11 14:02:35.397533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:42.575 [2024-12-11 14:02:35.397542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:42.575 [2024-12-11 14:02:35.397552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:42.575 [2024-12-11 14:02:35.397561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:42.575 [2024-12-11 14:02:35.397570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.575 [2024-12-11 14:02:35.397585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:42.575 [2024-12-11 14:02:35.397595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:21:42.575 [2024-12-11 14:02:35.397605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.575 [2024-12-11 14:02:35.417313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.575 [2024-12-11 14:02:35.417347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:42.575 [2024-12-11 14:02:35.417360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.720 ms 00:21:42.576 [2024-12-11 14:02:35.417386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.576 [2024-12-11 14:02:35.418024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.576 [2024-12-11 14:02:35.418039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:42.576 [2024-12-11 14:02:35.418051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:21:42.576 [2024-12-11 14:02:35.418061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.576 [2024-12-11 14:02:35.474629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.576 [2024-12-11 14:02:35.474771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:42.576 [2024-12-11 14:02:35.474791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.576 [2024-12-11 14:02:35.474802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.576 [2024-12-11 14:02:35.474913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.576 [2024-12-11 14:02:35.474926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:42.576 [2024-12-11 14:02:35.474938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.576 [2024-12-11 14:02:35.474948] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.576 [2024-12-11 14:02:35.474995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.576 [2024-12-11 14:02:35.475008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:42.576 [2024-12-11 14:02:35.475018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.576 [2024-12-11 14:02:35.475029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.576 [2024-12-11 14:02:35.475048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.576 [2024-12-11 14:02:35.475063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:42.576 [2024-12-11 14:02:35.475073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.576 [2024-12-11 14:02:35.475083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.576 [2024-12-11 14:02:35.601013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.576 [2024-12-11 14:02:35.601197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:42.576 [2024-12-11 14:02:35.601219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.576 [2024-12-11 14:02:35.601230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.835 [2024-12-11 14:02:35.703191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.835 [2024-12-11 14:02:35.703240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:42.835 [2024-12-11 14:02:35.703255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.835 [2024-12-11 14:02:35.703265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.835 [2024-12-11 14:02:35.703348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.835 [2024-12-11 14:02:35.703361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:42.835 [2024-12-11 14:02:35.703372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.835 [2024-12-11 14:02:35.703382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.835 [2024-12-11 14:02:35.703412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.835 [2024-12-11 14:02:35.703422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:42.835 [2024-12-11 14:02:35.703437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.835 [2024-12-11 14:02:35.703447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.835 [2024-12-11 14:02:35.703570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.835 [2024-12-11 14:02:35.703584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:42.835 [2024-12-11 14:02:35.703594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.835 [2024-12-11 14:02:35.703604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.835 [2024-12-11 14:02:35.703642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.835 [2024-12-11 14:02:35.703654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:42.835 [2024-12-11 14:02:35.703665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:21:42.835 [2024-12-11 14:02:35.703679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.835 [2024-12-11 14:02:35.703718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.835 [2024-12-11 14:02:35.703729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:42.835 [2024-12-11 14:02:35.703739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.835 [2024-12-11 14:02:35.703750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.835 [2024-12-11 14:02:35.703794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.835 [2024-12-11 14:02:35.703805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:42.835 [2024-12-11 14:02:35.703820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.835 [2024-12-11 14:02:35.703856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.835 [2024-12-11 14:02:35.703995] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 533.382 ms, result 0 00:21:44.214 00:21:44.214 00:21:44.214 14:02:36 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79538 00:21:44.214 14:02:36 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:44.214 14:02:36 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79538 00:21:44.214 14:02:36 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79538 ']' 00:21:44.214 14:02:36 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.214 14:02:36 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.214 14:02:36 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.214 14:02:36 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.214 14:02:36 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:44.214 [2024-12-11 14:02:37.084546] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:21:44.214 [2024-12-11 14:02:37.084668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79538 ] 00:21:44.473 [2024-12-11 14:02:37.266331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.473 [2024-12-11 14:02:37.376994] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.410 14:02:38 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.410 14:02:38 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:45.410 14:02:38 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:45.410 [2024-12-11 14:02:38.454195] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:45.410 [2024-12-11 14:02:38.454253] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:45.670 [2024-12-11 14:02:38.633379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.670 [2024-12-11 14:02:38.633425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:45.670 [2024-12-11 14:02:38.633443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:45.670 [2024-12-11 14:02:38.633454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.670 [2024-12-11 14:02:38.636697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.670 [2024-12-11 14:02:38.636733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:45.670 [2024-12-11 14:02:38.636747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.227 ms 00:21:45.670 [2024-12-11 14:02:38.636757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.670 [2024-12-11 14:02:38.636904] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:45.670 [2024-12-11 14:02:38.637933] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:45.670 [2024-12-11 14:02:38.637963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.670 [2024-12-11 14:02:38.637975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:45.670 [2024-12-11 14:02:38.637987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:21:45.670 [2024-12-11 14:02:38.637997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.670 [2024-12-11 14:02:38.639753] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:45.670 [2024-12-11 14:02:38.658981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.670 [2024-12-11 14:02:38.659024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:45.670 [2024-12-11 14:02:38.659039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.268 ms 00:21:45.670 [2024-12-11 14:02:38.659052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.670 [2024-12-11 14:02:38.659148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.670 [2024-12-11 14:02:38.659164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:45.670 [2024-12-11 14:02:38.659176] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:45.670 [2024-12-11 14:02:38.659188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.670 [2024-12-11 14:02:38.665854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.670 [2024-12-11 14:02:38.665888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:45.670 [2024-12-11 14:02:38.665901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.627 ms 00:21:45.670 [2024-12-11 14:02:38.665914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.670 [2024-12-11 14:02:38.666019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.670 [2024-12-11 14:02:38.666037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:45.670 [2024-12-11 14:02:38.666049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:45.670 [2024-12-11 14:02:38.666066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.670 [2024-12-11 14:02:38.666101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.670 [2024-12-11 14:02:38.666116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:45.670 [2024-12-11 14:02:38.666126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:45.670 [2024-12-11 14:02:38.666139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.670 [2024-12-11 14:02:38.666163] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:45.670 [2024-12-11 14:02:38.670982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.670 [2024-12-11 14:02:38.671012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:45.670 [2024-12-11 14:02:38.671027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.829 ms 00:21:45.670 [2024-12-11 14:02:38.671037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.670 [2024-12-11 14:02:38.671111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.670 [2024-12-11 14:02:38.671124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:45.670 [2024-12-11 14:02:38.671138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:45.670 [2024-12-11 14:02:38.671151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.670 [2024-12-11 14:02:38.671175] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:45.670 [2024-12-11 14:02:38.671198] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:45.670 [2024-12-11 14:02:38.671246] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:45.670 [2024-12-11 14:02:38.671267] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:45.670 [2024-12-11 14:02:38.671358] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:45.670 [2024-12-11 14:02:38.671372] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:45.670 [2024-12-11 14:02:38.671390] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:45.670 [2024-12-11 14:02:38.671404] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:45.670 [2024-12-11 14:02:38.671418] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:45.670 [2024-12-11 14:02:38.671429] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:45.670 [2024-12-11 14:02:38.671441] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:45.670 [2024-12-11 14:02:38.671451] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:45.670 [2024-12-11 14:02:38.671467] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:45.670 [2024-12-11 14:02:38.671477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.670 [2024-12-11 14:02:38.671490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:45.670 [2024-12-11 14:02:38.671500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:21:45.670 [2024-12-11 14:02:38.671512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.670 [2024-12-11 14:02:38.671590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.670 [2024-12-11 14:02:38.671603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:45.670 [2024-12-11 14:02:38.671614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:45.670 [2024-12-11 14:02:38.671626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.670 [2024-12-11 14:02:38.671717] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:45.670 [2024-12-11 14:02:38.671737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:45.670 [2024-12-11 14:02:38.671748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:45.670 [2024-12-11 14:02:38.671761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.670 [2024-12-11 14:02:38.671772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:45.670 [2024-12-11 14:02:38.671787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:45.670 [2024-12-11 14:02:38.671797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:45.670 [2024-12-11 14:02:38.671812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:45.670 [2024-12-11 14:02:38.671821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:45.671 [2024-12-11 14:02:38.671849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:45.671 [2024-12-11 14:02:38.671859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:45.671 [2024-12-11 14:02:38.671871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:45.671 [2024-12-11 14:02:38.671880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:45.671 [2024-12-11 14:02:38.671892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:45.671 [2024-12-11 14:02:38.671902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:45.671 [2024-12-11 14:02:38.671914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.671 
[2024-12-11 14:02:38.671923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:45.671 [2024-12-11 14:02:38.671935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:45.671 [2024-12-11 14:02:38.671954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.671 [2024-12-11 14:02:38.671966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:45.671 [2024-12-11 14:02:38.671976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:45.671 [2024-12-11 14:02:38.671988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.671 [2024-12-11 14:02:38.671997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:45.671 [2024-12-11 14:02:38.672012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:45.671 [2024-12-11 14:02:38.672021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.671 [2024-12-11 14:02:38.672033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:45.671 [2024-12-11 14:02:38.672042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:45.671 [2024-12-11 14:02:38.672054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.671 [2024-12-11 14:02:38.672063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:45.671 [2024-12-11 14:02:38.672076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:45.671 [2024-12-11 14:02:38.672086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.671 [2024-12-11 14:02:38.672098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:45.671 [2024-12-11 14:02:38.672107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:45.671 [2024-12-11 14:02:38.672118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:45.671 [2024-12-11 14:02:38.672128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:45.671 [2024-12-11 14:02:38.672142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:45.671 [2024-12-11 14:02:38.672152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:45.671 [2024-12-11 14:02:38.672172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:45.671 [2024-12-11 14:02:38.672183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:45.671 [2024-12-11 14:02:38.672201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.671 [2024-12-11 14:02:38.672211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:45.671 [2024-12-11 14:02:38.672225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:45.671 [2024-12-11 14:02:38.672235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.671 [2024-12-11 14:02:38.672249] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:45.671 [2024-12-11 14:02:38.672264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:45.671 [2024-12-11 14:02:38.672278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:45.671 [2024-12-11 14:02:38.672289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.671 [2024-12-11 14:02:38.672304] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:45.671 [2024-12-11 14:02:38.672314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:45.671 [2024-12-11 14:02:38.672329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:45.671 [2024-12-11 14:02:38.672339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:45.671 [2024-12-11 14:02:38.672353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:45.671 [2024-12-11 14:02:38.672363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:45.671 [2024-12-11 14:02:38.672379] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:45.671 [2024-12-11 14:02:38.672392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:45.671 [2024-12-11 14:02:38.672412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:45.671 [2024-12-11 14:02:38.672423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:45.671 [2024-12-11 14:02:38.672436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:45.671 [2024-12-11 14:02:38.672447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:45.671 [2024-12-11 14:02:38.672460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:45.671 [2024-12-11 14:02:38.672470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:45.671 [2024-12-11 14:02:38.672483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:45.671 [2024-12-11 14:02:38.672493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:45.671 [2024-12-11 14:02:38.672506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:45.671 [2024-12-11 14:02:38.672517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:45.671 [2024-12-11 14:02:38.672530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:45.671 [2024-12-11 14:02:38.672540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:45.671 [2024-12-11 14:02:38.672554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:45.671 [2024-12-11 14:02:38.672564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:45.671 [2024-12-11 14:02:38.672578] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:45.671 [2024-12-11 
14:02:38.672589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:45.671 [2024-12-11 14:02:38.672605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:45.671 [2024-12-11 14:02:38.672616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:45.671 [2024-12-11 14:02:38.672628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:45.671 [2024-12-11 14:02:38.672638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:45.671 [2024-12-11 14:02:38.672652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.671 [2024-12-11 14:02:38.672663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:45.671 [2024-12-11 14:02:38.672676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:21:45.671 [2024-12-11 14:02:38.672688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.671 [2024-12-11 14:02:38.712314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.671 [2024-12-11 14:02:38.712348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:45.671 [2024-12-11 14:02:38.712367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.620 ms 00:21:45.671 [2024-12-11 14:02:38.712384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.671 [2024-12-11 14:02:38.712505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.671 [2024-12-11 14:02:38.712518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:45.671 [2024-12-11 14:02:38.712534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:45.671 [2024-12-11 14:02:38.712544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.930 [2024-12-11 14:02:38.761465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.930 [2024-12-11 14:02:38.761504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:45.930 [2024-12-11 14:02:38.761521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.973 ms 00:21:45.930 [2024-12-11 14:02:38.761532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.930 [2024-12-11 14:02:38.761622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.930 [2024-12-11 14:02:38.761635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:45.930 [2024-12-11 14:02:38.761649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:45.930 [2024-12-11 14:02:38.761659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.930 [2024-12-11 14:02:38.762108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.930 [2024-12-11 14:02:38.762126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:45.930 [2024-12-11 14:02:38.762144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:21:45.930 [2024-12-11 14:02:38.762153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:45.930 [2024-12-11 14:02:38.762274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.930 [2024-12-11 14:02:38.762287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:45.930 [2024-12-11 14:02:38.762301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:21:45.930 [2024-12-11 14:02:38.762312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.930 [2024-12-11 14:02:38.784629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.930 [2024-12-11 14:02:38.784661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:45.930 [2024-12-11 14:02:38.784694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.323 ms 00:21:45.930 [2024-12-11 14:02:38.784705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.930 [2024-12-11 14:02:38.833617] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:45.930 [2024-12-11 14:02:38.833659] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:45.930 [2024-12-11 14:02:38.833684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.930 [2024-12-11 14:02:38.833697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:45.930 [2024-12-11 14:02:38.833717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.940 ms 00:21:45.930 [2024-12-11 14:02:38.833742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.930 [2024-12-11 14:02:38.864132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.930 [2024-12-11 14:02:38.864168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:45.930 [2024-12-11 14:02:38.864188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.314 ms 00:21:45.930 [2024-12-11 14:02:38.864199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.930 [2024-12-11 14:02:38.882697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.930 [2024-12-11 14:02:38.882730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:45.930 [2024-12-11 14:02:38.882755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.417 ms 00:21:45.930 [2024-12-11 14:02:38.882765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.930 [2024-12-11 14:02:38.900618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.930 [2024-12-11 14:02:38.900652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:45.930 [2024-12-11 14:02:38.900670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.781 ms 00:21:45.930 [2024-12-11 14:02:38.900681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.930 [2024-12-11 14:02:38.901427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.930 [2024-12-11 14:02:38.901452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:45.930 [2024-12-11 14:02:38.901469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:21:45.930 [2024-12-11 14:02:38.901480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.190 [2024-12-11 
14:02:38.987884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.190 [2024-12-11 14:02:38.987941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:46.190 [2024-12-11 14:02:38.987963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.506 ms 00:21:46.190 [2024-12-11 14:02:38.987974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.190 [2024-12-11 14:02:38.998716] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:46.190 [2024-12-11 14:02:39.014371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.190 [2024-12-11 14:02:39.014431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:46.190 [2024-12-11 14:02:39.014453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.316 ms 00:21:46.190 [2024-12-11 14:02:39.014469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.190 [2024-12-11 14:02:39.014564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.190 [2024-12-11 14:02:39.014583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:46.190 [2024-12-11 14:02:39.014595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:46.190 [2024-12-11 14:02:39.014610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.190 [2024-12-11 14:02:39.014667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.190 [2024-12-11 14:02:39.014684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:46.190 [2024-12-11 14:02:39.014695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:46.190 [2024-12-11 14:02:39.014716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.190 [2024-12-11 14:02:39.014741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.190 [2024-12-11 14:02:39.014758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:46.190 [2024-12-11 14:02:39.014768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:46.190 [2024-12-11 14:02:39.014783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.190 [2024-12-11 14:02:39.014854] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:46.190 [2024-12-11 14:02:39.014878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.190 [2024-12-11 14:02:39.014894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:46.190 [2024-12-11 14:02:39.014911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:46.190 [2024-12-11 14:02:39.014922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.190 [2024-12-11 14:02:39.051179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.190 [2024-12-11 14:02:39.051218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:46.190 [2024-12-11 14:02:39.051238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.275 ms 00:21:46.190 [2024-12-11 14:02:39.051249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.190 [2024-12-11 14:02:39.051361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.190 [2024-12-11 14:02:39.051375] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:46.190 [2024-12-11 14:02:39.051392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:46.190 [2024-12-11 14:02:39.051407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.190 [2024-12-11 14:02:39.052331] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:46.190 [2024-12-11 14:02:39.056372] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 419.345 ms, result 0 00:21:46.190 [2024-12-11 14:02:39.057608] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:46.190 Some configs were skipped because the RPC state that can call them passed over. 00:21:46.190 14:02:39 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:46.449 [2024-12-11 14:02:39.304984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.449 [2024-12-11 14:02:39.305037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:46.449 [2024-12-11 14:02:39.305054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.538 ms 00:21:46.449 [2024-12-11 14:02:39.305067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.449 [2024-12-11 14:02:39.305103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.659 ms, result 0 00:21:46.449 true 00:21:46.449 14:02:39 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:46.708 [2024-12-11 14:02:39.520480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.708 [2024-12-11 14:02:39.520523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:46.708 [2024-12-11 14:02:39.520544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:21:46.708 [2024-12-11 14:02:39.520555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.708 [2024-12-11 14:02:39.520602] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.261 ms, result 0 00:21:46.708 true 00:21:46.708 14:02:39 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79538 00:21:46.708 14:02:39 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79538 ']' 00:21:46.708 14:02:39 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79538 00:21:46.708 14:02:39 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:46.708 14:02:39 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.708 14:02:39 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79538 00:21:46.708 14:02:39 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:46.708 14:02:39 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:46.708 killing process with pid 79538 00:21:46.708 14:02:39 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79538' 00:21:46.708 14:02:39 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79538 00:21:46.708 14:02:39 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79538 00:21:47.696 [2024-12-11 14:02:40.706311] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.696 [2024-12-11 14:02:40.706363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:47.696 [2024-12-11 14:02:40.706380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:47.696 [2024-12-11 14:02:40.706393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.696 [2024-12-11 14:02:40.706420] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:47.696 [2024-12-11 14:02:40.710614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.696 [2024-12-11 14:02:40.710643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:47.696 [2024-12-11 14:02:40.710660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.179 ms 00:21:47.696 [2024-12-11 14:02:40.710670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.696 [2024-12-11 14:02:40.710936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.696 [2024-12-11 14:02:40.710954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:47.696 [2024-12-11 14:02:40.710969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:21:47.696 [2024-12-11 14:02:40.710979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.696 [2024-12-11 14:02:40.714377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.696 [2024-12-11 14:02:40.714412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:47.696 [2024-12-11 14:02:40.714430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.380 ms 00:21:47.696 [2024-12-11 14:02:40.714440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.696 [2024-12-11 14:02:40.720051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.696 [2024-12-11 14:02:40.720083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:47.696 [2024-12-11 14:02:40.720097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.578 ms 00:21:47.696 [2024-12-11 14:02:40.720118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.696 [2024-12-11 14:02:40.735320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.696 [2024-12-11 14:02:40.735363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:47.696 [2024-12-11 14:02:40.735382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.165 ms 00:21:47.696 [2024-12-11 14:02:40.735392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.956 [2024-12-11 14:02:40.745508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.956 [2024-12-11 14:02:40.745544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:47.956 [2024-12-11 14:02:40.745560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.060 ms 00:21:47.956 [2024-12-11 14:02:40.745570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.956 [2024-12-11 14:02:40.745718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.956 [2024-12-11 14:02:40.745731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:47.956 [2024-12-11 14:02:40.745745] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:21:47.956 [2024-12-11 14:02:40.745755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.956 [2024-12-11 14:02:40.761263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.956 [2024-12-11 14:02:40.761294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:47.956 [2024-12-11 14:02:40.761329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.503 ms 00:21:47.956 [2024-12-11 14:02:40.761339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.956 [2024-12-11 14:02:40.776388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.956 [2024-12-11 14:02:40.776417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:47.956 [2024-12-11 14:02:40.776442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.014 ms 00:21:47.956 [2024-12-11 14:02:40.776452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.956 [2024-12-11 14:02:40.791107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.956 [2024-12-11 14:02:40.791139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:47.956 [2024-12-11 14:02:40.791158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.621 ms 00:21:47.956 [2024-12-11 14:02:40.791168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.956 [2024-12-11 14:02:40.805585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.956 [2024-12-11 14:02:40.805616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:47.956 [2024-12-11 14:02:40.805634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.345 ms 00:21:47.956 [2024-12-11 14:02:40.805644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.956 [2024-12-11 14:02:40.805714] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:47.956 [2024-12-11 14:02:40.805731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.805749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.805760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.805776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.805787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.805808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.805819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.805844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.805856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.805871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 
14:02:40.805882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.805898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.805909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.805924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.805935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.805953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.805964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.805979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.805989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:21:47.956 [2024-12-11 14:02:40.806213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:47.956 [2024-12-11 14:02:40.806425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.806987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.807002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.807014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.807026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.807037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.807050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:47.957 [2024-12-11 14:02:40.807078] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:47.957 [2024-12-11 14:02:40.807097] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1fac6d1-37a7-4b75-a43a-f4195852c0c7 00:21:47.957 [2024-12-11 14:02:40.807114] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:47.957 [2024-12-11 14:02:40.807129] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:47.957 [2024-12-11 14:02:40.807139] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:47.957 [2024-12-11 14:02:40.807155] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:47.957 [2024-12-11 14:02:40.807165] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:47.957 [2024-12-11 14:02:40.807179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:47.957 [2024-12-11 14:02:40.807189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:47.957 [2024-12-11 14:02:40.807203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:47.957 [2024-12-11 14:02:40.807212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:47.957 [2024-12-11 14:02:40.807228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:47.957 [2024-12-11 14:02:40.807238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:47.957 [2024-12-11 14:02:40.807254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.519 ms 00:21:47.957 [2024-12-11 14:02:40.807263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.957 [2024-12-11 14:02:40.826838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.957 [2024-12-11 14:02:40.826867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:47.957 [2024-12-11 14:02:40.826890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.553 ms 00:21:47.957 [2024-12-11 14:02:40.826901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.957 [2024-12-11 14:02:40.827427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.957 [2024-12-11 14:02:40.827443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:47.957 [2024-12-11 14:02:40.827464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:21:47.957 [2024-12-11 14:02:40.827474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.957 [2024-12-11 14:02:40.897139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.957 [2024-12-11 14:02:40.897173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:47.957 [2024-12-11 14:02:40.897191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.957 [2024-12-11 14:02:40.897201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.957 [2024-12-11 14:02:40.897287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.957 [2024-12-11 14:02:40.897300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:47.957 [2024-12-11 14:02:40.897322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.957 [2024-12-11 14:02:40.897332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.957 [2024-12-11 14:02:40.897386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.957 [2024-12-11 14:02:40.897399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:47.957 [2024-12-11 14:02:40.897420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.957 [2024-12-11 14:02:40.897431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.957 [2024-12-11 14:02:40.897456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.957 [2024-12-11 14:02:40.897467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:47.957 [2024-12-11 14:02:40.897481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.957 [2024-12-11 14:02:40.897497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.216 [2024-12-11 14:02:41.022497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.216 [2024-12-11 14:02:41.022559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:48.216 [2024-12-11 14:02:41.022596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.216 [2024-12-11 14:02:41.022607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.216 [2024-12-11 
14:02:41.124952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.216 [2024-12-11 14:02:41.124994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:48.216 [2024-12-11 14:02:41.125013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.216 [2024-12-11 14:02:41.125029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.216 [2024-12-11 14:02:41.125151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.216 [2024-12-11 14:02:41.125164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:48.216 [2024-12-11 14:02:41.125185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.216 [2024-12-11 14:02:41.125196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.216 [2024-12-11 14:02:41.125230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.216 [2024-12-11 14:02:41.125241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:48.216 [2024-12-11 14:02:41.125257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.216 [2024-12-11 14:02:41.125267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.216 [2024-12-11 14:02:41.125393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.216 [2024-12-11 14:02:41.125407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:48.216 [2024-12-11 14:02:41.125422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.216 [2024-12-11 14:02:41.125433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.216 [2024-12-11 14:02:41.125476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.216 [2024-12-11 14:02:41.125489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:48.216 [2024-12-11 14:02:41.125505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.216 [2024-12-11 14:02:41.125515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.216 [2024-12-11 14:02:41.125563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.216 [2024-12-11 14:02:41.125575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:48.216 [2024-12-11 14:02:41.125594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.216 [2024-12-11 14:02:41.125605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.216 [2024-12-11 14:02:41.125652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.216 [2024-12-11 14:02:41.125665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:48.216 [2024-12-11 14:02:41.125680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.216 [2024-12-11 14:02:41.125691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.216 [2024-12-11 14:02:41.125861] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 420.170 ms, result 0 00:21:49.153 14:02:42 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:49.153 14:02:42 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:49.412 [2024-12-11 14:02:42.257465] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:21:49.412 [2024-12-11 14:02:42.257584] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79607 ] 00:21:49.412 [2024-12-11 14:02:42.440242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.671 [2024-12-11 14:02:42.550700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.929 [2024-12-11 14:02:42.915009] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:49.929 [2024-12-11 14:02:42.915080] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:50.190 [2024-12-11 14:02:43.076722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.190 [2024-12-11 14:02:43.076771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:50.190 [2024-12-11 14:02:43.076787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:50.190 [2024-12-11 14:02:43.076798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.190 [2024-12-11 14:02:43.079814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.190 [2024-12-11 14:02:43.079868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:50.190 [2024-12-11 14:02:43.079881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.000 ms 00:21:50.190 [2024-12-11 14:02:43.079890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.190 [2024-12-11 14:02:43.079986] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:50.190 [2024-12-11 14:02:43.080974] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:50.190 [2024-12-11 14:02:43.081007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.190 [2024-12-11 14:02:43.081018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:50.190 [2024-12-11 14:02:43.081028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:21:50.190 [2024-12-11 14:02:43.081038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.190 [2024-12-11 14:02:43.082497] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:50.190 [2024-12-11 14:02:43.100871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.190 [2024-12-11 14:02:43.100920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:50.190 [2024-12-11 14:02:43.100936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.404 ms 00:21:50.190 [2024-12-11 14:02:43.100947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.190 [2024-12-11 14:02:43.101048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.190 [2024-12-11 14:02:43.101062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:50.190 [2024-12-11 14:02:43.101074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.024 ms 00:21:50.190 [2024-12-11 14:02:43.101084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.190 [2024-12-11 14:02:43.107889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.190 [2024-12-11 14:02:43.107918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:50.190 [2024-12-11 14:02:43.107929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.774 ms 00:21:50.190 [2024-12-11 14:02:43.107940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.190 [2024-12-11 14:02:43.108043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.190 [2024-12-11 14:02:43.108058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:50.190 [2024-12-11 14:02:43.108069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:50.190 [2024-12-11 14:02:43.108079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.190 [2024-12-11 14:02:43.108111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.190 [2024-12-11 14:02:43.108122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:50.190 [2024-12-11 14:02:43.108133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:50.190 [2024-12-11 14:02:43.108143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.190 [2024-12-11 14:02:43.108165] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:50.190 [2024-12-11 14:02:43.112972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.190 [2024-12-11 14:02:43.113004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:50.190 [2024-12-11 14:02:43.113016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.820 ms 00:21:50.190 [2024-12-11 14:02:43.113026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.190 [2024-12-11 14:02:43.113095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.190 [2024-12-11 14:02:43.113108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:50.190 [2024-12-11 14:02:43.113120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:50.190 [2024-12-11 14:02:43.113129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.190 [2024-12-11 14:02:43.113156] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:50.190 [2024-12-11 14:02:43.113179] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:50.190 [2024-12-11 14:02:43.113214] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:50.190 [2024-12-11 14:02:43.113231] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:50.190 [2024-12-11 14:02:43.113321] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:50.190 [2024-12-11 14:02:43.113335] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:50.190 [2024-12-11 14:02:43.113347] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:50.190 [2024-12-11 14:02:43.113363] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:50.190 [2024-12-11 14:02:43.113375] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:50.190 [2024-12-11 14:02:43.113386] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:50.190 [2024-12-11 14:02:43.113397] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:50.190 [2024-12-11 14:02:43.113407] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:50.190 [2024-12-11 14:02:43.113417] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:50.190 [2024-12-11 14:02:43.113427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.190 [2024-12-11 14:02:43.113438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:50.190 [2024-12-11 14:02:43.113448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:21:50.190 [2024-12-11 14:02:43.113458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.190 [2024-12-11 14:02:43.113539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.190 [2024-12-11 14:02:43.113553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:50.190 [2024-12-11 14:02:43.113575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:50.190 [2024-12-11 14:02:43.113585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.190 [2024-12-11 14:02:43.113674] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:50.190 [2024-12-11 14:02:43.113686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:50.190 [2024-12-11 14:02:43.113697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:50.190 [2024-12-11 14:02:43.113724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.190 [2024-12-11 14:02:43.113735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:50.190 [2024-12-11 14:02:43.113744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:50.190 [2024-12-11 14:02:43.113754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:50.190 [2024-12-11 14:02:43.113764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:50.190 [2024-12-11 14:02:43.113774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:50.190 [2024-12-11 14:02:43.113783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:50.190 [2024-12-11 14:02:43.113793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:50.190 [2024-12-11 14:02:43.113814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:50.190 [2024-12-11 14:02:43.113824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:50.190 [2024-12-11 14:02:43.113834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:50.190 [2024-12-11 14:02:43.113843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:50.190 [2024-12-11 14:02:43.113869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.190 [2024-12-11 14:02:43.113879] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:50.190 [2024-12-11 14:02:43.113888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:50.190 [2024-12-11 14:02:43.113898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.190 [2024-12-11 14:02:43.113907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:50.190 [2024-12-11 14:02:43.113928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:50.190 [2024-12-11 14:02:43.113937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:50.190 [2024-12-11 14:02:43.113947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:50.190 [2024-12-11 14:02:43.113956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:50.190 [2024-12-11 14:02:43.113965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:50.190 [2024-12-11 14:02:43.113974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:50.190 [2024-12-11 14:02:43.113983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:50.190 [2024-12-11 14:02:43.113992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:50.190 [2024-12-11 14:02:43.114017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:50.191 [2024-12-11 14:02:43.114026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:50.191 [2024-12-11 14:02:43.114035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:50.191 [2024-12-11 14:02:43.114044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:50.191 [2024-12-11 14:02:43.114054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:50.191 [2024-12-11 14:02:43.114063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:50.191 [2024-12-11 14:02:43.114072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:50.191 [2024-12-11 14:02:43.114090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:50.191 [2024-12-11 14:02:43.114099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:50.191 [2024-12-11 14:02:43.114109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:50.191 [2024-12-11 14:02:43.114118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:50.191 [2024-12-11 14:02:43.114127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.191 [2024-12-11 14:02:43.114136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:50.191 [2024-12-11 14:02:43.114145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:50.191 [2024-12-11 14:02:43.114156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.191 [2024-12-11 14:02:43.114165] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:50.191 [2024-12-11 14:02:43.114176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:50.191 [2024-12-11 14:02:43.114190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:50.191 [2024-12-11 14:02:43.114200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.191 [2024-12-11 14:02:43.114210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:50.191 
[2024-12-11 14:02:43.114221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:50.191 [2024-12-11 14:02:43.114230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:50.191 [2024-12-11 14:02:43.114240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:50.191 [2024-12-11 14:02:43.114249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:50.191 [2024-12-11 14:02:43.114258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:50.191 [2024-12-11 14:02:43.114269] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:50.191 [2024-12-11 14:02:43.114281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:50.191 [2024-12-11 14:02:43.114293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:50.191 [2024-12-11 14:02:43.114303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:50.191 [2024-12-11 14:02:43.114314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:50.191 [2024-12-11 14:02:43.114324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:50.191 [2024-12-11 14:02:43.114335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:50.191 [2024-12-11 14:02:43.114346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:50.191 [2024-12-11 14:02:43.114356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:50.191 [2024-12-11 14:02:43.114366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:50.191 [2024-12-11 14:02:43.114376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:50.191 [2024-12-11 14:02:43.114388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:50.191 [2024-12-11 14:02:43.114398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:50.191 [2024-12-11 14:02:43.114408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:50.191 [2024-12-11 14:02:43.114418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:50.191 [2024-12-11 14:02:43.114429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:50.191 [2024-12-11 14:02:43.114439] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:50.191 [2024-12-11 14:02:43.114450] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:50.191 [2024-12-11 14:02:43.114461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:50.191 [2024-12-11 14:02:43.114472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:50.191 [2024-12-11 14:02:43.114482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:50.191 [2024-12-11 14:02:43.114493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:50.191 [2024-12-11 14:02:43.114503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.191 [2024-12-11 14:02:43.114518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:50.191 [2024-12-11 14:02:43.114528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:21:50.191 [2024-12-11 14:02:43.114538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.191 [2024-12-11 14:02:43.154410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.191 [2024-12-11 14:02:43.154568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:50.191 [2024-12-11 14:02:43.154696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.877 ms 00:21:50.191 [2024-12-11 14:02:43.154736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.191 [2024-12-11 14:02:43.154905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.191 [2024-12-11 14:02:43.155014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:50.191 [2024-12-11 14:02:43.155055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:50.191 [2024-12-11 14:02:43.155087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.191 [2024-12-11 14:02:43.211203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.191 [2024-12-11 14:02:43.211350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:50.191 [2024-12-11 14:02:43.211440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.112 ms 00:21:50.191 [2024-12-11 14:02:43.211478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.191 [2024-12-11 14:02:43.211591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.191 [2024-12-11 14:02:43.211692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:50.191 [2024-12-11 14:02:43.211730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:50.191 [2024-12-11 14:02:43.211760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.191 [2024-12-11 14:02:43.212294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.191 [2024-12-11 14:02:43.212416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:50.191 [2024-12-11 14:02:43.212491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:21:50.191 [2024-12-11 14:02:43.212533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.191 [2024-12-11 
14:02:43.212678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.191 [2024-12-11 14:02:43.212734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:50.191 [2024-12-11 14:02:43.212807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:21:50.191 [2024-12-11 14:02:43.212869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.191 [2024-12-11 14:02:43.231407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.191 [2024-12-11 14:02:43.231567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:50.191 [2024-12-11 14:02:43.231688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.519 ms 00:21:50.191 [2024-12-11 14:02:43.231727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.450 [2024-12-11 14:02:43.251281] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:50.450 [2024-12-11 14:02:43.251469] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:50.450 [2024-12-11 14:02:43.251637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.450 [2024-12-11 14:02:43.251672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:50.450 [2024-12-11 14:02:43.251703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.802 ms 00:21:50.450 [2024-12-11 14:02:43.251732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.450 [2024-12-11 14:02:43.281195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.450 [2024-12-11 14:02:43.281349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:50.450 [2024-12-11 14:02:43.281478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.316 ms 00:21:50.450 [2024-12-11 14:02:43.281515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.450 [2024-12-11 14:02:43.299517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.450 [2024-12-11 14:02:43.299664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:50.450 [2024-12-11 14:02:43.299805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.933 ms 00:21:50.450 [2024-12-11 14:02:43.299859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.450 [2024-12-11 14:02:43.317327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.450 [2024-12-11 14:02:43.317469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:50.450 [2024-12-11 14:02:43.317624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.399 ms 00:21:50.450 [2024-12-11 14:02:43.317659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.450 [2024-12-11 14:02:43.318468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.451 [2024-12-11 14:02:43.318598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:50.451 [2024-12-11 14:02:43.318671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:21:50.451 [2024-12-11 14:02:43.318706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.451 [2024-12-11 14:02:43.403263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:50.451 [2024-12-11 14:02:43.403316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:50.451 [2024-12-11 14:02:43.403332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.659 ms 00:21:50.451 [2024-12-11 14:02:43.403343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.451 [2024-12-11 14:02:43.414378] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:50.451 [2024-12-11 14:02:43.430385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.451 [2024-12-11 14:02:43.430433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:50.451 [2024-12-11 14:02:43.430448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.986 ms 00:21:50.451 [2024-12-11 14:02:43.430481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.451 [2024-12-11 14:02:43.430613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.451 [2024-12-11 14:02:43.430627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:50.451 [2024-12-11 14:02:43.430638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:50.451 [2024-12-11 14:02:43.430648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.451 [2024-12-11 14:02:43.430701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.451 [2024-12-11 14:02:43.430713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:50.451 [2024-12-11 14:02:43.430723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:50.451 [2024-12-11 14:02:43.430736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.451 [2024-12-11 14:02:43.430769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.451 [2024-12-11 14:02:43.430782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:50.451 [2024-12-11 14:02:43.430792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:50.451 [2024-12-11 14:02:43.430802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.451 [2024-12-11 14:02:43.430841] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:50.451 [2024-12-11 14:02:43.430876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.451 [2024-12-11 14:02:43.430903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:50.451 [2024-12-11 14:02:43.430913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:50.451 [2024-12-11 14:02:43.430923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.451 [2024-12-11 14:02:43.466633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.451 [2024-12-11 14:02:43.466673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:50.451 [2024-12-11 14:02:43.466687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.742 ms 00:21:50.451 [2024-12-11 14:02:43.466697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.451 [2024-12-11 14:02:43.466805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.451 [2024-12-11 14:02:43.466818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:21:50.451 [2024-12-11 14:02:43.466847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:50.451 [2024-12-11 14:02:43.466857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.451 [2024-12-11 14:02:43.467719] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:50.451 [2024-12-11 14:02:43.472011] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 391.357 ms, result 0 00:21:50.451 [2024-12-11 14:02:43.473004] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:50.451 [2024-12-11 14:02:43.491309] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:51.829  [2024-12-11T14:02:45.812Z] Copying: 28/256 [MB] (28 MBps) [2024-12-11T14:02:46.747Z] Copying: 53/256 [MB] (24 MBps) [2024-12-11T14:02:47.684Z] Copying: 77/256 [MB] (24 MBps) [2024-12-11T14:02:48.653Z] Copying: 102/256 [MB] (24 MBps) [2024-12-11T14:02:49.594Z] Copying: 126/256 [MB] (24 MBps) [2024-12-11T14:02:50.530Z] Copying: 151/256 [MB] (24 MBps) [2024-12-11T14:02:51.908Z] Copying: 175/256 [MB] (24 MBps) [2024-12-11T14:02:52.846Z] Copying: 199/256 [MB] (24 MBps) [2024-12-11T14:02:53.782Z] Copying: 223/256 [MB] (24 MBps) [2024-12-11T14:02:54.041Z] Copying: 247/256 [MB] (23 MBps) [2024-12-11T14:02:54.041Z] Copying: 256/256 [MB] (average 24 MBps)[2024-12-11 14:02:53.828317] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:00.994 [2024-12-11 14:02:53.842502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.994 [2024-12-11 14:02:53.842542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:00.994 [2024-12-11 14:02:53.842563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:00.994 [2024-12-11 14:02:53.842573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.994 [2024-12-11 14:02:53.842594] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:00.994 [2024-12-11 14:02:53.846650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.994 [2024-12-11 14:02:53.846679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:00.994 [2024-12-11 14:02:53.846691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.047 ms 00:22:00.994 [2024-12-11 14:02:53.846716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.994 [2024-12-11 14:02:53.846956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.994 [2024-12-11 14:02:53.846974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:00.994 [2024-12-11 14:02:53.846985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:22:00.994 [2024-12-11 14:02:53.846994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.994 [2024-12-11 14:02:53.849808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.994 [2024-12-11 14:02:53.849837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:00.994 [2024-12-11 14:02:53.849848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.796 ms 00:22:00.994 [2024-12-11 14:02:53.849873] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.994 [2024-12-11 14:02:53.855288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.994 [2024-12-11 14:02:53.855321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:00.994 [2024-12-11 14:02:53.855332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.406 ms 00:22:00.994 [2024-12-11 14:02:53.855341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.994 [2024-12-11 14:02:53.890077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.994 [2024-12-11 14:02:53.890120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:00.994 [2024-12-11 14:02:53.890133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.718 ms 00:22:00.994 [2024-12-11 14:02:53.890158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.994 [2024-12-11 14:02:53.910477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.994 [2024-12-11 14:02:53.910514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:00.994 [2024-12-11 14:02:53.910537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.297 ms 00:22:00.994 [2024-12-11 14:02:53.910547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.994 [2024-12-11 14:02:53.910680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.994 [2024-12-11 14:02:53.910693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:00.994 [2024-12-11 14:02:53.910718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:22:00.994 [2024-12-11 14:02:53.910727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.994 [2024-12-11 14:02:53.945524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.994 [2024-12-11 14:02:53.945559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:00.994 [2024-12-11 14:02:53.945571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.836 ms 00:22:00.994 [2024-12-11 14:02:53.945596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.994 [2024-12-11 14:02:53.980799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.994 [2024-12-11 14:02:53.980842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:00.994 [2024-12-11 14:02:53.980855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.206 ms 00:22:00.994 [2024-12-11 14:02:53.980865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.994 [2024-12-11 14:02:54.014847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.994 [2024-12-11 14:02:54.014888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:00.994 [2024-12-11 14:02:54.014901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.967 ms 00:22:00.994 [2024-12-11 14:02:54.014927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.254 [2024-12-11 14:02:54.049176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.254 [2024-12-11 14:02:54.049211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:01.254 [2024-12-11 14:02:54.049222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 34.208 ms 00:22:01.254 [2024-12-11 14:02:54.049248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.254 [2024-12-11 14:02:54.049301] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:01.254 [2024-12-11 14:02:54.049317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:01.254 [2024-12-11 14:02:54.049543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 
[2024-12-11 14:02:54.049553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:22:01.255 [2024-12-11 14:02:54.049812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.049993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:01.255 [2024-12-11 14:02:54.050443] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:01.255 [2024-12-11 14:02:54.050453] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1fac6d1-37a7-4b75-a43a-f4195852c0c7 00:22:01.255 [2024-12-11 14:02:54.050464] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:01.255 [2024-12-11 14:02:54.050473] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:01.255 [2024-12-11 14:02:54.050483] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:01.255 [2024-12-11 14:02:54.050494] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:01.255 [2024-12-11 14:02:54.050504] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:01.255 [2024-12-11 14:02:54.050514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:01.255 [2024-12-11 14:02:54.050531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:01.255 [2024-12-11 14:02:54.050540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:01.255 [2024-12-11 14:02:54.050548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:01.255 [2024-12-11 14:02:54.050558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.255 [2024-12-11 14:02:54.050568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:01.255 [2024-12-11 14:02:54.050579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:22:01.255 [2024-12-11 14:02:54.050588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.255 [2024-12-11 14:02:54.069978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.255 [2024-12-11 14:02:54.070012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:01.255 [2024-12-11 14:02:54.070023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.401 ms 00:22:01.255 [2024-12-11 14:02:54.070049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.256 [2024-12-11 14:02:54.070667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.256 [2024-12-11 14:02:54.070682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:01.256 [2024-12-11 14:02:54.070693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:22:01.256 [2024-12-11 14:02:54.070702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.256 [2024-12-11 14:02:54.123558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.256 [2024-12-11 14:02:54.123605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:01.256 [2024-12-11 14:02:54.123618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.256 [2024-12-11 14:02:54.123633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.256 [2024-12-11 14:02:54.123715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.256 [2024-12-11 
14:02:54.123726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:01.256 [2024-12-11 14:02:54.123737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.256 [2024-12-11 14:02:54.123746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.256 [2024-12-11 14:02:54.123795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.256 [2024-12-11 14:02:54.123807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:01.256 [2024-12-11 14:02:54.123817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.256 [2024-12-11 14:02:54.123844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.256 [2024-12-11 14:02:54.123883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.256 [2024-12-11 14:02:54.123894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:01.256 [2024-12-11 14:02:54.123904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.256 [2024-12-11 14:02:54.123913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.256 [2024-12-11 14:02:54.240478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.256 [2024-12-11 14:02:54.240527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:01.256 [2024-12-11 14:02:54.240541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.256 [2024-12-11 14:02:54.240567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.514 [2024-12-11 14:02:54.335554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.514 [2024-12-11 14:02:54.335600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:01.514 [2024-12-11 14:02:54.335613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.514 [2024-12-11 14:02:54.335639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.514 [2024-12-11 14:02:54.335698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.514 [2024-12-11 14:02:54.335709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:01.514 [2024-12-11 14:02:54.335720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.514 [2024-12-11 14:02:54.335730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.514 [2024-12-11 14:02:54.335758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.514 [2024-12-11 14:02:54.335780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:01.514 [2024-12-11 14:02:54.335790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.514 [2024-12-11 14:02:54.335800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.514 [2024-12-11 14:02:54.335935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.514 [2024-12-11 14:02:54.335965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:01.514 [2024-12-11 14:02:54.335976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.514 [2024-12-11 14:02:54.335986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.514 [2024-12-11 14:02:54.336023] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.514 [2024-12-11 14:02:54.336036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:01.514 [2024-12-11 14:02:54.336055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.514 [2024-12-11 14:02:54.336065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.514 [2024-12-11 14:02:54.336103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.514 [2024-12-11 14:02:54.336115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:01.514 [2024-12-11 14:02:54.336125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.514 [2024-12-11 14:02:54.336135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.514 [2024-12-11 14:02:54.336176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.514 [2024-12-11 14:02:54.336196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:01.514 [2024-12-11 14:02:54.336206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.514 [2024-12-11 14:02:54.336216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.514 [2024-12-11 14:02:54.336363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 494.666 ms, result 0 00:22:02.449 00:22:02.449 00:22:02.449 14:02:55 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:22:02.449 14:02:55 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:03.017 14:02:55 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:03.017 [2024-12-11 14:02:55.905493] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:22:03.017 [2024-12-11 14:02:55.905610] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79753 ] 00:22:03.278 [2024-12-11 14:02:56.086809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.278 [2024-12-11 14:02:56.192392] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.536 [2024-12-11 14:02:56.545414] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:03.536 [2024-12-11 14:02:56.545481] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:03.796 [2024-12-11 14:02:56.706998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.796 [2024-12-11 14:02:56.707213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:03.796 [2024-12-11 14:02:56.707238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:03.796 [2024-12-11 14:02:56.707250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.796 [2024-12-11 14:02:56.710383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.796 [2024-12-11 14:02:56.710548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:03.796 [2024-12-11 14:02:56.710569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.093 ms 00:22:03.796 [2024-12-11 14:02:56.710580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.796 [2024-12-11 14:02:56.710685] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:03.796 [2024-12-11 14:02:56.711663] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:03.796 [2024-12-11 14:02:56.711698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.796 [2024-12-11 14:02:56.711709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:03.796 [2024-12-11 14:02:56.711720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:22:03.796 [2024-12-11 14:02:56.711730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.796 [2024-12-11 14:02:56.713193] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:03.796 [2024-12-11 14:02:56.731753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.796 [2024-12-11 14:02:56.731791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:03.796 [2024-12-11 14:02:56.731804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.591 ms 00:22:03.796 [2024-12-11 14:02:56.731814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.796 [2024-12-11 14:02:56.731954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.796 [2024-12-11 14:02:56.731969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:03.796 [2024-12-11 14:02:56.731981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:03.796 [2024-12-11 14:02:56.731990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.796 [2024-12-11 14:02:56.738605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:03.796 [2024-12-11 14:02:56.738632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:03.796 [2024-12-11 14:02:56.738643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.581 ms 00:22:03.796 [2024-12-11 14:02:56.738668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.796 [2024-12-11 14:02:56.738772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.796 [2024-12-11 14:02:56.738786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:03.796 [2024-12-11 14:02:56.738797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:03.796 [2024-12-11 14:02:56.738807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.796 [2024-12-11 14:02:56.738864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.796 [2024-12-11 14:02:56.738876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:03.796 [2024-12-11 14:02:56.738902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:03.796 [2024-12-11 14:02:56.738912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.796 [2024-12-11 14:02:56.738934] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:03.796 [2024-12-11 14:02:56.743495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.796 [2024-12-11 14:02:56.743528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:03.796 [2024-12-11 14:02:56.743540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.572 ms 00:22:03.796 [2024-12-11 14:02:56.743549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.796 [2024-12-11 14:02:56.743619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.796 [2024-12-11 14:02:56.743632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:03.796 [2024-12-11 14:02:56.743643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:03.796 [2024-12-11 14:02:56.743652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.796 [2024-12-11 14:02:56.743677] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:03.796 [2024-12-11 14:02:56.743698] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:03.796 [2024-12-11 14:02:56.743732] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:03.796 [2024-12-11 14:02:56.743749] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:03.796 [2024-12-11 14:02:56.743854] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:03.796 [2024-12-11 14:02:56.743885] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:03.796 [2024-12-11 14:02:56.743898] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:03.796 [2024-12-11 14:02:56.743915] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:03.796 [2024-12-11 14:02:56.743928] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:03.796 [2024-12-11 14:02:56.743939] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:03.796 [2024-12-11 14:02:56.743949] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:03.796 [2024-12-11 14:02:56.743958] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:03.796 [2024-12-11 14:02:56.743968] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:03.796 [2024-12-11 14:02:56.743978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.796 [2024-12-11 14:02:56.743988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:03.796 [2024-12-11 14:02:56.743999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:22:03.796 [2024-12-11 14:02:56.744008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.796 [2024-12-11 14:02:56.744085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.796 [2024-12-11 14:02:56.744100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:03.796 [2024-12-11 14:02:56.744110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:03.796 [2024-12-11 14:02:56.744119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.796 [2024-12-11 14:02:56.744206] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:03.797 [2024-12-11 14:02:56.744218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:03.797 [2024-12-11 14:02:56.744228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:03.797 [2024-12-11 14:02:56.744239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.797 [2024-12-11 14:02:56.744250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:03.797 [2024-12-11 14:02:56.744259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:03.797 [2024-12-11 14:02:56.744279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:03.797 [2024-12-11 14:02:56.744288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:03.797 [2024-12-11 14:02:56.744297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:03.797 [2024-12-11 14:02:56.744306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:03.797 [2024-12-11 14:02:56.744331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:03.797 [2024-12-11 14:02:56.744353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:03.797 [2024-12-11 14:02:56.744362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:03.797 [2024-12-11 14:02:56.744371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:03.797 [2024-12-11 14:02:56.744380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:03.797 [2024-12-11 14:02:56.744389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.797 [2024-12-11 14:02:56.744398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:03.797 [2024-12-11 14:02:56.744407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:03.797 [2024-12-11 14:02:56.744416] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.797 [2024-12-11 14:02:56.744426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:03.797 [2024-12-11 14:02:56.744435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:03.797 [2024-12-11 14:02:56.744444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.797 [2024-12-11 14:02:56.744452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:03.797 [2024-12-11 14:02:56.744461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:03.797 [2024-12-11 14:02:56.744470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.797 [2024-12-11 14:02:56.744479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:03.797 [2024-12-11 14:02:56.744488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:03.797 [2024-12-11 14:02:56.744497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.797 [2024-12-11 14:02:56.744506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:03.797 [2024-12-11 14:02:56.744515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:03.797 [2024-12-11 14:02:56.744523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.797 [2024-12-11 14:02:56.744533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:03.797 [2024-12-11 14:02:56.744542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:03.797 [2024-12-11 14:02:56.744551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:03.797 [2024-12-11 14:02:56.744559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:03.797 [2024-12-11 14:02:56.744568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:03.797 [2024-12-11 14:02:56.744577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:03.797 [2024-12-11 14:02:56.744586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:03.797 [2024-12-11 14:02:56.744594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:03.797 [2024-12-11 14:02:56.744603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.797 [2024-12-11 14:02:56.744612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:03.797 [2024-12-11 14:02:56.744620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:03.797 [2024-12-11 14:02:56.744631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.797 [2024-12-11 14:02:56.744640] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:03.797 [2024-12-11 14:02:56.744650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:03.797 [2024-12-11 14:02:56.744664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:03.797 [2024-12-11 14:02:56.744673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.797 [2024-12-11 14:02:56.744683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:03.797 [2024-12-11 14:02:56.744692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:03.797 [2024-12-11 14:02:56.744702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:03.797 
[2024-12-11 14:02:56.744711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:03.797 [2024-12-11 14:02:56.744720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:03.797 [2024-12-11 14:02:56.744729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:03.797 [2024-12-11 14:02:56.744740] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:03.797 [2024-12-11 14:02:56.744751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:03.797 [2024-12-11 14:02:56.744763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:03.797 [2024-12-11 14:02:56.744773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:03.797 [2024-12-11 14:02:56.744783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:03.797 [2024-12-11 14:02:56.744793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:03.797 [2024-12-11 14:02:56.744804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:03.797 [2024-12-11 14:02:56.744814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:03.797 [2024-12-11 14:02:56.744824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:03.797 [2024-12-11 14:02:56.744834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:03.797 [2024-12-11 14:02:56.744844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:03.797 [2024-12-11 14:02:56.744855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:03.797 [2024-12-11 14:02:56.744876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:03.797 [2024-12-11 14:02:56.744887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:03.797 [2024-12-11 14:02:56.744897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:03.797 [2024-12-11 14:02:56.744907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:03.797 [2024-12-11 14:02:56.744917] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:03.797 [2024-12-11 14:02:56.744928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:03.797 [2024-12-11 14:02:56.744939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:03.797 [2024-12-11 14:02:56.744949] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:03.797 [2024-12-11 14:02:56.744959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:03.797 [2024-12-11 14:02:56.744970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:03.797 [2024-12-11 14:02:56.744981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.797 [2024-12-11 14:02:56.744996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:03.797 [2024-12-11 14:02:56.745006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms 00:22:03.797 [2024-12-11 14:02:56.745016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.797 [2024-12-11 14:02:56.783356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.797 [2024-12-11 14:02:56.783410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:03.797 [2024-12-11 14:02:56.783424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.346 ms 00:22:03.797 [2024-12-11 14:02:56.783435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.797 [2024-12-11 14:02:56.783554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.797 [2024-12-11 14:02:56.783567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:03.797 [2024-12-11 14:02:56.783578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:03.797 [2024-12-11 14:02:56.783587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.056 [2024-12-11 14:02:56.848060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.056 [2024-12-11 14:02:56.848098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:04.056 [2024-12-11 14:02:56.848115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.554 ms 00:22:04.056 [2024-12-11 14:02:56.848125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.056 [2024-12-11 14:02:56.848216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.056 [2024-12-11 14:02:56.848230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:04.056 [2024-12-11 14:02:56.848241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:04.056 [2024-12-11 14:02:56.848251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.056 [2024-12-11 14:02:56.848691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.056 [2024-12-11 14:02:56.848704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:04.057 [2024-12-11 14:02:56.848715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:22:04.057 [2024-12-11 14:02:56.848729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.057 [2024-12-11 14:02:56.848864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.057 [2024-12-11 14:02:56.848878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:04.057 [2024-12-11 14:02:56.848889] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:22:04.057 [2024-12-11 14:02:56.848898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.057 [2024-12-11 14:02:56.867905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.057 [2024-12-11 14:02:56.867939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:04.057 [2024-12-11 14:02:56.867952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.015 ms 00:22:04.057 [2024-12-11 14:02:56.867963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.057 [2024-12-11 14:02:56.886489] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:04.057 [2024-12-11 14:02:56.886668] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:04.057 [2024-12-11 14:02:56.886688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.057 [2024-12-11 14:02:56.886699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:04.057 [2024-12-11 14:02:56.886710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.653 ms 00:22:04.057 [2024-12-11 14:02:56.886720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.057 [2024-12-11 14:02:56.916612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.057 [2024-12-11 14:02:56.916651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:04.057 [2024-12-11 14:02:56.916664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.861 ms 00:22:04.057 [2024-12-11 14:02:56.916690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.057 [2024-12-11 14:02:56.934290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.057 [2024-12-11 14:02:56.934336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:04.057 [2024-12-11 14:02:56.934349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.550 ms 00:22:04.057 [2024-12-11 14:02:56.934358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.057 [2024-12-11 14:02:56.951590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.057 [2024-12-11 14:02:56.951626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:04.057 [2024-12-11 14:02:56.951639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.170 ms 00:22:04.057 [2024-12-11 14:02:56.951648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.057 [2024-12-11 14:02:56.952408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.057 [2024-12-11 14:02:56.952440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:04.057 [2024-12-11 14:02:56.952452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:22:04.057 [2024-12-11 14:02:56.952462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.057 [2024-12-11 14:02:57.038488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.057 [2024-12-11 14:02:57.038551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:04.057 [2024-12-11 14:02:57.038569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.135 ms 00:22:04.057 [2024-12-11 14:02:57.038595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.057 [2024-12-11 14:02:57.049243] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:04.057 [2024-12-11 14:02:57.065323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.057 [2024-12-11 14:02:57.065536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:04.057 [2024-12-11 14:02:57.065568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.664 ms 00:22:04.057 [2024-12-11 14:02:57.065579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.057 [2024-12-11 14:02:57.065708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.057 [2024-12-11 14:02:57.065721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:04.057 [2024-12-11 14:02:57.065733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:04.057 [2024-12-11 14:02:57.065742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.057 [2024-12-11 14:02:57.065797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.057 [2024-12-11 14:02:57.065809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:04.057 [2024-12-11 14:02:57.065850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:04.057 [2024-12-11 14:02:57.065864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.057 [2024-12-11 14:02:57.065899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.057 [2024-12-11 14:02:57.065913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:04.057 [2024-12-11 14:02:57.065923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:04.057 [2024-12-11 14:02:57.065933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.057 [2024-12-11 14:02:57.065969] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:04.057 [2024-12-11 14:02:57.065981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.057 [2024-12-11 14:02:57.065992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:04.057 [2024-12-11 14:02:57.066001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:04.057 [2024-12-11 14:02:57.066012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.057 [2024-12-11 14:02:57.101302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.057 [2024-12-11 14:02:57.101338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:04.057 [2024-12-11 14:02:57.101352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.320 ms 00:22:04.057 [2024-12-11 14:02:57.101378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.057 [2024-12-11 14:02:57.101489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.057 [2024-12-11 14:02:57.101503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:04.057 [2024-12-11 14:02:57.101513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:04.057 [2024-12-11 14:02:57.101529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:04.316 [2024-12-11 14:02:57.102432] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:04.316 [2024-12-11 14:02:57.106589] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 395.811 ms, result 0 00:22:04.316 [2024-12-11 14:02:57.107408] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:04.316 [2024-12-11 14:02:57.125345] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:04.316  [2024-12-11T14:02:57.363Z] Copying: 4096/4096 [kB] (average 22 MBps)[2024-12-11 14:02:57.306405] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:04.316 [2024-12-11 14:02:57.319955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.316 [2024-12-11 14:02:57.319996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:04.316 [2024-12-11 14:02:57.320009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:04.316 [2024-12-11 14:02:57.320035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.316 [2024-12-11 14:02:57.320057] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:04.316 [2024-12-11 14:02:57.324044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.316 [2024-12-11 14:02:57.324072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:04.316 [2024-12-11 14:02:57.324083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.979 ms 00:22:04.316 [2024-12-11 14:02:57.324093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.316 [2024-12-11 14:02:57.326011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.316 [2024-12-11 14:02:57.326152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:04.316 [2024-12-11 14:02:57.326172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.882 ms 00:22:04.316 [2024-12-11 14:02:57.326189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.316 [2024-12-11 14:02:57.329396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.316 [2024-12-11 14:02:57.329514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:04.316 [2024-12-11 14:02:57.329533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.189 ms 00:22:04.316 [2024-12-11 14:02:57.329544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.316 [2024-12-11 14:02:57.335077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.316 [2024-12-11 14:02:57.335110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:04.316 [2024-12-11 14:02:57.335122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.506 ms 00:22:04.316 [2024-12-11 14:02:57.335146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.576 [2024-12-11 14:02:57.370567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.576 [2024-12-11 14:02:57.370705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:04.576 [2024-12-11 14:02:57.370725] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 35.420 ms 00:22:04.576 [2024-12-11 14:02:57.370752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.576 [2024-12-11 14:02:57.391040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.576 [2024-12-11 14:02:57.391084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:04.576 [2024-12-11 14:02:57.391097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.254 ms 00:22:04.576 [2024-12-11 14:02:57.391122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.576 [2024-12-11 14:02:57.391273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.576 [2024-12-11 14:02:57.391286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:04.576 [2024-12-11 14:02:57.391308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:22:04.576 [2024-12-11 14:02:57.391317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.576 [2024-12-11 14:02:57.426928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.576 [2024-12-11 14:02:57.426965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:04.576 [2024-12-11 14:02:57.426978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.651 ms 00:22:04.576 [2024-12-11 14:02:57.426987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.576 [2024-12-11 14:02:57.461174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.576 [2024-12-11 14:02:57.461209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:04.576 [2024-12-11 14:02:57.461221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.189 ms 00:22:04.576 [2024-12-11 14:02:57.461230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.576 [2024-12-11 14:02:57.494885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.576 [2024-12-11 14:02:57.494920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:04.577 [2024-12-11 14:02:57.494932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.658 ms 00:22:04.577 [2024-12-11 14:02:57.494957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.577 [2024-12-11 14:02:57.528948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.577 [2024-12-11 14:02:57.528982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:04.577 [2024-12-11 14:02:57.528994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.969 ms 00:22:04.577 [2024-12-11 14:02:57.529002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.577 [2024-12-11 14:02:57.529052] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:04.577 [2024-12-11 14:02:57.529068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:22:04.577 [2024-12-11 14:02:57.529109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:04.577 [2024-12-11 14:02:57.529788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.529797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.529806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.529815] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.529839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.529849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.529875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.529885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.529895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.529907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.529917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.529928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.529938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.529948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.529958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.529968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.529978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.529988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.530010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.530020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.530031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.530041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.530051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.530061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.530071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:04.578 [2024-12-11 14:02:57.530095] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:04.578 [2024-12-11 14:02:57.530104] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1fac6d1-37a7-4b75-a43a-f4195852c0c7 00:22:04.578 [2024-12-11 14:02:57.530115] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:04.578 [2024-12-11 14:02:57.530124] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:22:04.578 [2024-12-11 14:02:57.530134] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:04.578 [2024-12-11 14:02:57.530143] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:04.578 [2024-12-11 14:02:57.530152] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:04.578 [2024-12-11 14:02:57.530166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:04.578 [2024-12-11 14:02:57.530175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:04.578 [2024-12-11 14:02:57.530184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:04.578 [2024-12-11 14:02:57.530193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:04.578 [2024-12-11 14:02:57.530202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.578 [2024-12-11 14:02:57.530211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:04.578 [2024-12-11 14:02:57.530221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.153 ms 00:22:04.578 [2024-12-11 14:02:57.530230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.578 [2024-12-11 14:02:57.549537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.578 [2024-12-11 14:02:57.549570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:04.578 [2024-12-11 14:02:57.549581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.318 ms 00:22:04.578 [2024-12-11 14:02:57.549596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.578 [2024-12-11 14:02:57.550168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.578 [2024-12-11 14:02:57.550181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:04.578 [2024-12-11 14:02:57.550192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:22:04.578 [2024-12-11 14:02:57.550202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.578 [2024-12-11 14:02:57.603050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.578 [2024-12-11 14:02:57.603209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:04.578 [2024-12-11 14:02:57.603251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.578 [2024-12-11 14:02:57.603261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.578 [2024-12-11 14:02:57.603346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.578 [2024-12-11 14:02:57.603358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:04.578 [2024-12-11 14:02:57.603378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.578 [2024-12-11 14:02:57.603388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.578 [2024-12-11 14:02:57.603439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.578 [2024-12-11 14:02:57.603451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:04.578 [2024-12-11 14:02:57.603461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.578 [2024-12-11 14:02:57.603475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.578 [2024-12-11 14:02:57.603494] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.578 [2024-12-11 14:02:57.603504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:04.578 [2024-12-11 14:02:57.603514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.578 [2024-12-11 14:02:57.603523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.838 [2024-12-11 14:02:57.722067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.838 [2024-12-11 14:02:57.722290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:04.838 [2024-12-11 14:02:57.722311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.838 [2024-12-11 14:02:57.722329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.838 [2024-12-11 14:02:57.818878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.838 [2024-12-11 14:02:57.818921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:04.838 [2024-12-11 14:02:57.818936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.839 [2024-12-11 14:02:57.818946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.839 [2024-12-11 14:02:57.819007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.839 [2024-12-11 14:02:57.819019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:04.839 [2024-12-11 14:02:57.819030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.839 [2024-12-11 14:02:57.819040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.839 [2024-12-11 14:02:57.819075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.839 [2024-12-11 14:02:57.819086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:04.839 [2024-12-11 14:02:57.819097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.839 [2024-12-11 14:02:57.819107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.839 [2024-12-11 14:02:57.819203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.839 [2024-12-11 14:02:57.819217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:04.839 [2024-12-11 14:02:57.819228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.839 [2024-12-11 14:02:57.819238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.839 [2024-12-11 14:02:57.819275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.839 [2024-12-11 14:02:57.819292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:04.839 [2024-12-11 14:02:57.819303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.839 [2024-12-11 14:02:57.819313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.839 [2024-12-11 14:02:57.819351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.839 [2024-12-11 14:02:57.819362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:04.839 [2024-12-11 14:02:57.819373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.839 [2024-12-11 14:02:57.819383] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:04.839 [2024-12-11 14:02:57.819426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.839 [2024-12-11 14:02:57.819438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:04.839 [2024-12-11 14:02:57.819448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.839 [2024-12-11 14:02:57.819458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.839 [2024-12-11 14:02:57.819592] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 500.441 ms, result 0 00:22:05.786 00:22:05.786 00:22:06.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.045 14:02:58 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79787 00:22:06.045 14:02:58 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:06.045 14:02:58 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79787 00:22:06.045 14:02:58 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79787 ']' 00:22:06.045 14:02:58 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.045 14:02:58 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.045 14:02:58 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.045 14:02:58 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.045 14:02:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:06.045 [2024-12-11 14:02:58.968443] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:22:06.045 [2024-12-11 14:02:58.968767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79787 ] 00:22:06.304 [2024-12-11 14:02:59.149941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.304 [2024-12-11 14:02:59.252032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.241 14:03:00 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.241 14:03:00 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:07.241 14:03:00 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:07.500 [2024-12-11 14:03:00.309099] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:07.500 [2024-12-11 14:03:00.309374] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:07.500 [2024-12-11 14:03:00.494333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.500 [2024-12-11 14:03:00.494381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:07.500 [2024-12-11 14:03:00.494400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:07.500 [2024-12-11 14:03:00.494411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.500 [2024-12-11 14:03:00.497493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.501 [2024-12-11 14:03:00.497529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:07.501 [2024-12-11 14:03:00.497544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.064 ms 00:22:07.501 [2024-12-11 14:03:00.497554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.501 [2024-12-11 14:03:00.497669] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:07.501 [2024-12-11 14:03:00.498633] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:07.501 [2024-12-11 14:03:00.498656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.501 [2024-12-11 14:03:00.498667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:07.501 [2024-12-11 14:03:00.498680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:22:07.501 [2024-12-11 14:03:00.498690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.501 [2024-12-11 14:03:00.500329] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:07.501 [2024-12-11 14:03:00.519547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.501 [2024-12-11 14:03:00.519696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:07.501 [2024-12-11 14:03:00.519846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.254 ms 00:22:07.501 [2024-12-11 14:03:00.519892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.501 [2024-12-11 14:03:00.520011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.501 [2024-12-11 14:03:00.520246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:07.501 [2024-12-11 14:03:00.520289] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:07.501 [2024-12-11 14:03:00.520323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.501 [2024-12-11 14:03:00.527060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.501 [2024-12-11 14:03:00.527202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:07.501 [2024-12-11 14:03:00.527286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.672 ms 00:22:07.501 [2024-12-11 14:03:00.527326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.501 [2024-12-11 14:03:00.527460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.501 [2024-12-11 14:03:00.527587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:07.501 [2024-12-11 14:03:00.527648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:22:07.501 [2024-12-11 14:03:00.527685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.501 [2024-12-11 14:03:00.527737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.501 [2024-12-11 14:03:00.527772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:07.501 [2024-12-11 14:03:00.527810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:07.501 [2024-12-11 14:03:00.527866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.501 [2024-12-11 14:03:00.527995] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:07.501 [2024-12-11 14:03:00.532896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.501 [2024-12-11 14:03:00.533025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:07.501 [2024-12-11 14:03:00.533050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.913 ms 00:22:07.501 [2024-12-11 14:03:00.533060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.501 [2024-12-11 14:03:00.533149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.501 [2024-12-11 14:03:00.533162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:07.501 [2024-12-11 14:03:00.533179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:07.501 [2024-12-11 14:03:00.533194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.501 [2024-12-11 14:03:00.533223] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:07.501 [2024-12-11 14:03:00.533250] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:07.501 [2024-12-11 14:03:00.533303] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:07.501 [2024-12-11 14:03:00.533323] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:07.501 [2024-12-11 14:03:00.533417] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:07.501 [2024-12-11 14:03:00.533430] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:07.501 [2024-12-11 14:03:00.533454] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:07.501 [2024-12-11 14:03:00.533468] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:07.501 [2024-12-11 14:03:00.533485] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:07.501 [2024-12-11 14:03:00.533497] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:07.501 [2024-12-11 14:03:00.533511] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:07.501 [2024-12-11 14:03:00.533522] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:07.501 [2024-12-11 14:03:00.533539] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:07.501 [2024-12-11 14:03:00.533549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.501 [2024-12-11 14:03:00.533562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:07.501 [2024-12-11 14:03:00.533572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:22:07.501 [2024-12-11 14:03:00.533584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.501 [2024-12-11 14:03:00.533662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.501 [2024-12-11 14:03:00.533675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:07.501 [2024-12-11 14:03:00.533686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:07.501 [2024-12-11 14:03:00.533697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.501 [2024-12-11 14:03:00.533785] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:07.501 [2024-12-11 14:03:00.533800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:07.501 [2024-12-11 14:03:00.533810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:07.501 [2024-12-11 14:03:00.533840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.501 [2024-12-11 14:03:00.533852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:07.501 [2024-12-11 14:03:00.533865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:07.501 [2024-12-11 14:03:00.533875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:07.501 [2024-12-11 14:03:00.533890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:07.501 [2024-12-11 14:03:00.533900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:07.501 [2024-12-11 14:03:00.533912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:07.501 [2024-12-11 14:03:00.533921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:07.501 [2024-12-11 14:03:00.533934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:07.501 [2024-12-11 14:03:00.533943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:07.501 [2024-12-11 14:03:00.533955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:07.501 [2024-12-11 14:03:00.533964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:07.501 [2024-12-11 14:03:00.533975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.501 
[2024-12-11 14:03:00.533985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:07.501 [2024-12-11 14:03:00.534012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:07.501 [2024-12-11 14:03:00.534030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.501 [2024-12-11 14:03:00.534043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:07.501 [2024-12-11 14:03:00.534056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:07.501 [2024-12-11 14:03:00.534068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.501 [2024-12-11 14:03:00.534077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:07.501 [2024-12-11 14:03:00.534099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:07.501 [2024-12-11 14:03:00.534108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.501 [2024-12-11 14:03:00.534120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:07.501 [2024-12-11 14:03:00.534129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:07.501 [2024-12-11 14:03:00.534141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.501 [2024-12-11 14:03:00.534150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:07.501 [2024-12-11 14:03:00.534163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:07.501 [2024-12-11 14:03:00.534172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.501 [2024-12-11 14:03:00.534184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:07.501 [2024-12-11 14:03:00.534193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:07.501 [2024-12-11 14:03:00.534205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:07.501 [2024-12-11 14:03:00.534214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:07.501 [2024-12-11 14:03:00.534225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:07.501 [2024-12-11 14:03:00.534234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:07.501 [2024-12-11 14:03:00.534246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:07.501 [2024-12-11 14:03:00.534255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:07.501 [2024-12-11 14:03:00.534268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.501 [2024-12-11 14:03:00.534277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:07.501 [2024-12-11 14:03:00.534288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:07.501 [2024-12-11 14:03:00.534297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.501 [2024-12-11 14:03:00.534309] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:07.501 [2024-12-11 14:03:00.534324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:07.501 [2024-12-11 14:03:00.534336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:07.501 [2024-12-11 14:03:00.534345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.502 [2024-12-11 14:03:00.534358] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:07.502 [2024-12-11 14:03:00.534367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:07.502 [2024-12-11 14:03:00.534379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:07.502 [2024-12-11 14:03:00.534388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:07.502 [2024-12-11 14:03:00.534399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:07.502 [2024-12-11 14:03:00.534408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:07.502 [2024-12-11 14:03:00.534422] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:07.502 [2024-12-11 14:03:00.534434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:07.502 [2024-12-11 14:03:00.534452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:07.502 [2024-12-11 14:03:00.534463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:07.502 [2024-12-11 14:03:00.534476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:07.502 [2024-12-11 14:03:00.534486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:07.502 [2024-12-11 14:03:00.534499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:07.502 [2024-12-11 14:03:00.534510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:07.502 [2024-12-11 14:03:00.534522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:07.502 [2024-12-11 14:03:00.534532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:07.502 [2024-12-11 14:03:00.534545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:07.502 [2024-12-11 14:03:00.534555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:07.502 [2024-12-11 14:03:00.534568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:07.502 [2024-12-11 14:03:00.534578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:07.502 [2024-12-11 14:03:00.534591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:07.502 [2024-12-11 14:03:00.534601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:07.502 [2024-12-11 14:03:00.534614] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:07.502 [2024-12-11 
14:03:00.534626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:07.502 [2024-12-11 14:03:00.534641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:07.502 [2024-12-11 14:03:00.534652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:07.502 [2024-12-11 14:03:00.534664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:07.502 [2024-12-11 14:03:00.534674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:07.502 [2024-12-11 14:03:00.534687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.502 [2024-12-11 14:03:00.534698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:07.502 [2024-12-11 14:03:00.534710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:22:07.502 [2024-12-11 14:03:00.534723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.761 [2024-12-11 14:03:00.573083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.761 [2024-12-11 14:03:00.573121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:07.762 [2024-12-11 14:03:00.573141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.358 ms 00:22:07.762 [2024-12-11 14:03:00.573157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.762 [2024-12-11 14:03:00.573284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.762 [2024-12-11 14:03:00.573301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:07.762 [2024-12-11 14:03:00.573318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:07.762 [2024-12-11 14:03:00.573328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.762 [2024-12-11 14:03:00.620280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.762 [2024-12-11 14:03:00.620434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:07.762 [2024-12-11 14:03:00.620464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.001 ms 00:22:07.762 [2024-12-11 14:03:00.620476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.762 [2024-12-11 14:03:00.620572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.762 [2024-12-11 14:03:00.620585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:07.762 [2024-12-11 14:03:00.620601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:07.762 [2024-12-11 14:03:00.620612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.762 [2024-12-11 14:03:00.621067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.762 [2024-12-11 14:03:00.621081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:07.762 [2024-12-11 14:03:00.621102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:22:07.762 [2024-12-11 14:03:00.621113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:07.762 [2024-12-11 14:03:00.621236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.762 [2024-12-11 14:03:00.621251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:07.762 [2024-12-11 14:03:00.621266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:22:07.762 [2024-12-11 14:03:00.621277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.762 [2024-12-11 14:03:00.643719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.762 [2024-12-11 14:03:00.643872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:07.762 [2024-12-11 14:03:00.643902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.448 ms 00:22:07.762 [2024-12-11 14:03:00.643914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.762 [2024-12-11 14:03:00.691600] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:07.762 [2024-12-11 14:03:00.691641] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:07.762 [2024-12-11 14:03:00.691662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.762 [2024-12-11 14:03:00.691674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:07.762 [2024-12-11 14:03:00.691690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.699 ms 00:22:07.762 [2024-12-11 14:03:00.691711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.762 [2024-12-11 14:03:00.721326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.762 [2024-12-11 14:03:00.721367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:07.762 [2024-12-11 14:03:00.721387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.574 ms 00:22:07.762 [2024-12-11 14:03:00.721414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.762 [2024-12-11 14:03:00.739328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.762 [2024-12-11 14:03:00.739366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:07.762 [2024-12-11 14:03:00.739389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.833 ms 00:22:07.762 [2024-12-11 14:03:00.739398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.762 [2024-12-11 14:03:00.757011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.762 [2024-12-11 14:03:00.757046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:07.762 [2024-12-11 14:03:00.757065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.560 ms 00:22:07.762 [2024-12-11 14:03:00.757075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.762 [2024-12-11 14:03:00.757793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.762 [2024-12-11 14:03:00.757817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:07.762 [2024-12-11 14:03:00.757852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:22:07.762 [2024-12-11 14:03:00.757879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.021 [2024-12-11 
14:03:00.843487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.021 [2024-12-11 14:03:00.843739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:08.021 [2024-12-11 14:03:00.843769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.716 ms 00:22:08.021 [2024-12-11 14:03:00.843781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.021 [2024-12-11 14:03:00.854744] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:08.021 [2024-12-11 14:03:00.870220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.021 [2024-12-11 14:03:00.870271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:08.021 [2024-12-11 14:03:00.870306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.351 ms 00:22:08.021 [2024-12-11 14:03:00.870319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.021 [2024-12-11 14:03:00.870408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.021 [2024-12-11 14:03:00.870423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:08.021 [2024-12-11 14:03:00.870434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:08.021 [2024-12-11 14:03:00.870447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.021 [2024-12-11 14:03:00.870497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.021 [2024-12-11 14:03:00.870520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:08.021 [2024-12-11 14:03:00.870530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:08.021 [2024-12-11 14:03:00.870551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.021 [2024-12-11 14:03:00.870576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.021 [2024-12-11 14:03:00.870592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:08.021 [2024-12-11 14:03:00.870603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:08.021 [2024-12-11 14:03:00.870617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.021 [2024-12-11 14:03:00.870659] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:08.021 [2024-12-11 14:03:00.870681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.021 [2024-12-11 14:03:00.870697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:08.021 [2024-12-11 14:03:00.870712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:08.021 [2024-12-11 14:03:00.870723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.021 [2024-12-11 14:03:00.907283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.021 [2024-12-11 14:03:00.907322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:08.021 [2024-12-11 14:03:00.907342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.581 ms 00:22:08.021 [2024-12-11 14:03:00.907353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.021 [2024-12-11 14:03:00.907465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.021 [2024-12-11 14:03:00.907479] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:08.021 [2024-12-11 14:03:00.907495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:08.021 [2024-12-11 14:03:00.907510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.021 [2024-12-11 14:03:00.908472] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:08.021 [2024-12-11 14:03:00.912656] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 414.488 ms, result 0 00:22:08.021 [2024-12-11 14:03:00.913885] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:08.021 Some configs were skipped because the RPC state that can call them passed over. 00:22:08.021 14:03:00 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:08.280 [2024-12-11 14:03:01.161883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.280 [2024-12-11 14:03:01.162095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:08.280 [2024-12-11 14:03:01.162181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:22:08.280 [2024-12-11 14:03:01.162228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.280 [2024-12-11 14:03:01.162302] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.047 ms, result 0 00:22:08.280 true 00:22:08.280 14:03:01 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:08.539 [2024-12-11 14:03:01.369362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.539 [2024-12-11 14:03:01.369410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:08.539 [2024-12-11 14:03:01.369428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.180 ms 00:22:08.539 [2024-12-11 14:03:01.369439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.539 [2024-12-11 14:03:01.369481] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.310 ms, result 0 00:22:08.539 true 00:22:08.539 14:03:01 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79787 00:22:08.539 14:03:01 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79787 ']' 00:22:08.539 14:03:01 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79787 00:22:08.539 14:03:01 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:08.539 14:03:01 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:08.539 14:03:01 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79787 00:22:08.539 14:03:01 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:08.539 14:03:01 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:08.539 killing process with pid 79787 00:22:08.539 14:03:01 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79787' 00:22:08.539 14:03:01 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79787 00:22:08.539 14:03:01 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79787 00:22:09.920 [2024-12-11 14:03:02.542351] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.920 [2024-12-11 14:03:02.542415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:09.920 [2024-12-11 14:03:02.542432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:09.920 [2024-12-11 14:03:02.542445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.920 [2024-12-11 14:03:02.542472] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:09.920 [2024-12-11 14:03:02.546595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.920 [2024-12-11 14:03:02.546641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:09.920 [2024-12-11 14:03:02.546661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.108 ms 00:22:09.920 [2024-12-11 14:03:02.546671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.920 [2024-12-11 14:03:02.546963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.920 [2024-12-11 14:03:02.546978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:09.920 [2024-12-11 14:03:02.546991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:22:09.920 [2024-12-11 14:03:02.547001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.920 [2024-12-11 14:03:02.550390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.920 [2024-12-11 14:03:02.550428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:09.920 [2024-12-11 14:03:02.550448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.370 ms 00:22:09.920 [2024-12-11 14:03:02.550458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.920 [2024-12-11 14:03:02.556087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.920 [2024-12-11 14:03:02.556122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:09.920 [2024-12-11 14:03:02.556139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.595 ms 00:22:09.920 [2024-12-11 14:03:02.556165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.920 [2024-12-11 14:03:02.571107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.920 [2024-12-11 14:03:02.571152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:09.920 [2024-12-11 14:03:02.571171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.905 ms 00:22:09.920 [2024-12-11 14:03:02.571181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.920 [2024-12-11 14:03:02.581505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.920 [2024-12-11 14:03:02.581664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:09.920 [2024-12-11 14:03:02.581692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.268 ms 00:22:09.920 [2024-12-11 14:03:02.581702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.920 [2024-12-11 14:03:02.581921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.920 [2024-12-11 14:03:02.581938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:09.920 [2024-12-11 14:03:02.581951] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:22:09.920 [2024-12-11 14:03:02.581961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.921 [2024-12-11 14:03:02.596655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.921 [2024-12-11 14:03:02.596688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:09.921 [2024-12-11 14:03:02.596706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.687 ms 00:22:09.921 [2024-12-11 14:03:02.596732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.921 [2024-12-11 14:03:02.611043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.921 [2024-12-11 14:03:02.611201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:09.921 [2024-12-11 14:03:02.611237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.276 ms 00:22:09.921 [2024-12-11 14:03:02.611247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.921 [2024-12-11 14:03:02.625593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.921 [2024-12-11 14:03:02.625744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:09.921 [2024-12-11 14:03:02.625770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.309 ms 00:22:09.921 [2024-12-11 14:03:02.625779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.921 [2024-12-11 14:03:02.640022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.921 [2024-12-11 14:03:02.640173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:09.921 [2024-12-11 14:03:02.640198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.172 ms 00:22:09.921 [2024-12-11 14:03:02.640208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.921 [2024-12-11 14:03:02.640280] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:09.921 [2024-12-11 14:03:02.640297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 
14:03:02.640439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:22:09.921 [2024-12-11 14:03:02.640777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.640998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:09.921 [2024-12-11 14:03:02.641284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:09.922 [2024-12-11 14:03:02.641582] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:09.922 [2024-12-11 14:03:02.641600] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1fac6d1-37a7-4b75-a43a-f4195852c0c7 00:22:09.922 [2024-12-11 14:03:02.641614] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:09.922 [2024-12-11 14:03:02.641627] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:09.922 [2024-12-11 14:03:02.641637] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:09.922 [2024-12-11 14:03:02.641650] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:09.922 [2024-12-11 14:03:02.641659] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:09.922 [2024-12-11 14:03:02.641672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:09.922 [2024-12-11 14:03:02.641682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:09.922 [2024-12-11 14:03:02.641693] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:09.922 [2024-12-11 14:03:02.641702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:09.922 [2024-12-11 14:03:02.641715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:09.922 [2024-12-11 14:03:02.641725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:09.922 [2024-12-11 14:03:02.641737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.444 ms 00:22:09.922 [2024-12-11 14:03:02.641747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.922 [2024-12-11 14:03:02.661311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.922 [2024-12-11 14:03:02.661345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:09.922 [2024-12-11 14:03:02.661384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.558 ms 00:22:09.922 [2024-12-11 14:03:02.661394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.922 [2024-12-11 14:03:02.662020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.922 [2024-12-11 14:03:02.662037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:09.922 [2024-12-11 14:03:02.662058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:22:09.922 [2024-12-11 14:03:02.662069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.922 [2024-12-11 14:03:02.730208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.922 [2024-12-11 14:03:02.730353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:09.922 [2024-12-11 14:03:02.730384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.922 [2024-12-11 14:03:02.730395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.922 [2024-12-11 14:03:02.730481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.922 [2024-12-11 14:03:02.730494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:09.922 [2024-12-11 14:03:02.730515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.922 [2024-12-11 14:03:02.730526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.922 [2024-12-11 14:03:02.730582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.922 [2024-12-11 14:03:02.730595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:09.922 [2024-12-11 14:03:02.730615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.922 [2024-12-11 14:03:02.730625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.922 [2024-12-11 14:03:02.730649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.922 [2024-12-11 14:03:02.730660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:09.922 [2024-12-11 14:03:02.730675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.922 [2024-12-11 14:03:02.730690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.922 [2024-12-11 14:03:02.851735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.922 [2024-12-11 14:03:02.851783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:09.922 [2024-12-11 14:03:02.851804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.922 [2024-12-11 14:03:02.851815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.922 [2024-12-11 
14:03:02.953733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.922 [2024-12-11 14:03:02.953780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:09.922 [2024-12-11 14:03:02.953801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.922 [2024-12-11 14:03:02.953816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.922 [2024-12-11 14:03:02.953954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.922 [2024-12-11 14:03:02.953967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:09.922 [2024-12-11 14:03:02.953989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.922 [2024-12-11 14:03:02.953999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.922 [2024-12-11 14:03:02.954034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.922 [2024-12-11 14:03:02.954045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:09.922 [2024-12-11 14:03:02.954060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.922 [2024-12-11 14:03:02.954070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.922 [2024-12-11 14:03:02.954196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.922 [2024-12-11 14:03:02.954211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:09.922 [2024-12-11 14:03:02.954225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.922 [2024-12-11 14:03:02.954235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.922 [2024-12-11 14:03:02.954280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.922 [2024-12-11 14:03:02.954292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:09.922 [2024-12-11 14:03:02.954307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.922 [2024-12-11 14:03:02.954317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.922 [2024-12-11 14:03:02.954366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.922 [2024-12-11 14:03:02.954377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:09.922 [2024-12-11 14:03:02.954397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.922 [2024-12-11 14:03:02.954408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.922 [2024-12-11 14:03:02.954454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.922 [2024-12-11 14:03:02.954467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:09.922 [2024-12-11 14:03:02.954482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.922 [2024-12-11 14:03:02.954492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.922 [2024-12-11 14:03:02.954638] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 412.919 ms, result 0 00:22:11.319 14:03:03 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:11.319 [2024-12-11 14:03:04.067643] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:22:11.319 [2024-12-11 14:03:04.067763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79850 ] 00:22:11.319 [2024-12-11 14:03:04.249881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.319 [2024-12-11 14:03:04.359988] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.888 [2024-12-11 14:03:04.710560] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:11.888 [2024-12-11 14:03:04.710631] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:11.888 [2024-12-11 14:03:04.872673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.888 [2024-12-11 14:03:04.872726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:11.888 [2024-12-11 14:03:04.872741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:11.888 [2024-12-11 14:03:04.872752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.888 [2024-12-11 14:03:04.875901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.888 [2024-12-11 14:03:04.875940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:11.888 [2024-12-11 14:03:04.875952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.132 ms 00:22:11.888 [2024-12-11 14:03:04.875963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.888 [2024-12-11 14:03:04.876076] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:11.888 [2024-12-11 14:03:04.877097] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:11.888 [2024-12-11 14:03:04.877134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.888 [2024-12-11 14:03:04.877146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:11.888 [2024-12-11 14:03:04.877157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:22:11.888 [2024-12-11 14:03:04.877167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.888 [2024-12-11 14:03:04.878655] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:11.888 [2024-12-11 14:03:04.897682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.888 [2024-12-11 14:03:04.897723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:11.888 [2024-12-11 14:03:04.897737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.059 ms 00:22:11.888 [2024-12-11 14:03:04.897748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.888 [2024-12-11 14:03:04.897864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.888 [2024-12-11 14:03:04.897881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:11.888 [2024-12-11 14:03:04.897892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:11.888 [2024-12-11 
14:03:04.897902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.888 [2024-12-11 14:03:04.904543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.888 [2024-12-11 14:03:04.904571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:11.888 [2024-12-11 14:03:04.904582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.608 ms 00:22:11.888 [2024-12-11 14:03:04.904591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.888 [2024-12-11 14:03:04.904706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.888 [2024-12-11 14:03:04.904720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:11.888 [2024-12-11 14:03:04.904731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:11.888 [2024-12-11 14:03:04.904741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.888 [2024-12-11 14:03:04.904772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.888 [2024-12-11 14:03:04.904783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:11.888 [2024-12-11 14:03:04.904793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:11.888 [2024-12-11 14:03:04.904803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.888 [2024-12-11 14:03:04.904825] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:11.888 [2024-12-11 14:03:04.909662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.889 [2024-12-11 14:03:04.909693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:11.889 [2024-12-11 14:03:04.909706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.850 ms 00:22:11.889 [2024-12-11 14:03:04.909715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.889 [2024-12-11 14:03:04.909783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.889 [2024-12-11 14:03:04.909796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:11.889 [2024-12-11 14:03:04.909807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:11.889 [2024-12-11 14:03:04.909816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.889 [2024-12-11 14:03:04.909874] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:11.889 [2024-12-11 14:03:04.909898] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:11.889 [2024-12-11 14:03:04.909932] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:11.889 [2024-12-11 14:03:04.909968] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:11.889 [2024-12-11 14:03:04.910055] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:11.889 [2024-12-11 14:03:04.910069] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:11.889 [2024-12-11 14:03:04.910088] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:22:11.889 [2024-12-11 14:03:04.910106] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:11.889 [2024-12-11 14:03:04.910118] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:11.889 [2024-12-11 14:03:04.910129] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:11.889 [2024-12-11 14:03:04.910139] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:11.889 [2024-12-11 14:03:04.910149] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:11.889 [2024-12-11 14:03:04.910158] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:11.889 [2024-12-11 14:03:04.910170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.889 [2024-12-11 14:03:04.910180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:11.889 [2024-12-11 14:03:04.910190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:22:11.889 [2024-12-11 14:03:04.910200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.889 [2024-12-11 14:03:04.910276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.889 [2024-12-11 14:03:04.910290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:11.889 [2024-12-11 14:03:04.910300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:11.889 [2024-12-11 14:03:04.910310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.889 [2024-12-11 14:03:04.910409] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:11.889 [2024-12-11 14:03:04.910425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:11.889 [2024-12-11 14:03:04.910436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:11.889 [2024-12-11 14:03:04.910446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.889 [2024-12-11 14:03:04.910456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:11.889 [2024-12-11 14:03:04.910466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:11.889 [2024-12-11 14:03:04.910475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:11.889 [2024-12-11 14:03:04.910485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:11.889 [2024-12-11 14:03:04.910495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:11.889 [2024-12-11 14:03:04.910504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:11.889 [2024-12-11 14:03:04.910514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:11.889 [2024-12-11 14:03:04.910536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:11.889 [2024-12-11 14:03:04.910545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:11.889 [2024-12-11 14:03:04.910554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:11.889 [2024-12-11 14:03:04.910564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:11.889 [2024-12-11 14:03:04.910573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.889 [2024-12-11 14:03:04.910582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:22:11.889 [2024-12-11 14:03:04.910592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:11.889 [2024-12-11 14:03:04.910606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.889 [2024-12-11 14:03:04.910622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:11.889 [2024-12-11 14:03:04.910639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:11.889 [2024-12-11 14:03:04.910651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.889 [2024-12-11 14:03:04.910660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:11.889 [2024-12-11 14:03:04.910669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:11.889 [2024-12-11 14:03:04.910679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.889 [2024-12-11 14:03:04.910688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:11.889 [2024-12-11 14:03:04.910697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:11.889 [2024-12-11 14:03:04.910706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.889 [2024-12-11 14:03:04.910715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:11.889 [2024-12-11 14:03:04.910724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:11.889 [2024-12-11 14:03:04.910733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.889 [2024-12-11 14:03:04.910742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:11.889 [2024-12-11 14:03:04.910752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:11.889 [2024-12-11 14:03:04.910761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:11.889 [2024-12-11 14:03:04.910770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:11.889 [2024-12-11 14:03:04.910779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:11.889 [2024-12-11 14:03:04.910794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:11.889 [2024-12-11 14:03:04.910809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:11.889 [2024-12-11 14:03:04.910837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:11.889 [2024-12-11 14:03:04.910847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.889 [2024-12-11 14:03:04.910856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:11.889 [2024-12-11 14:03:04.910865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:11.889 [2024-12-11 14:03:04.910876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.889 [2024-12-11 14:03:04.910885] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:11.889 [2024-12-11 14:03:04.910895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:11.889 [2024-12-11 14:03:04.910910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:11.889 [2024-12-11 14:03:04.910920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.889 [2024-12-11 14:03:04.910930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:11.889 [2024-12-11 14:03:04.910940] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:11.889 [2024-12-11 14:03:04.910950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:11.889 [2024-12-11 14:03:04.910959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:11.889 [2024-12-11 14:03:04.910969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:11.889 [2024-12-11 14:03:04.910978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:11.889 [2024-12-11 14:03:04.910989] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:11.889 [2024-12-11 14:03:04.911002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:11.889 [2024-12-11 14:03:04.911017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:11.889 [2024-12-11 14:03:04.911036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:11.889 [2024-12-11 14:03:04.911055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:11.889 [2024-12-11 14:03:04.911068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:11.889 [2024-12-11 14:03:04.911079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:11.889 [2024-12-11 14:03:04.911089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:11.889 [2024-12-11 14:03:04.911099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:11.889 [2024-12-11 14:03:04.911109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:11.889 [2024-12-11 14:03:04.911120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:11.889 [2024-12-11 14:03:04.911130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:11.889 [2024-12-11 14:03:04.911140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:11.889 [2024-12-11 14:03:04.911156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:11.889 [2024-12-11 14:03:04.911172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:11.889 [2024-12-11 14:03:04.911183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:11.889 [2024-12-11 14:03:04.911193] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:11.889 [2024-12-11 14:03:04.911204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:11.889 [2024-12-11 14:03:04.911216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:11.889 [2024-12-11 14:03:04.911227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:11.890 [2024-12-11 14:03:04.911237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:11.890 [2024-12-11 14:03:04.911248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:11.890 [2024-12-11 14:03:04.911261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.890 [2024-12-11 14:03:04.911279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:11.890 [2024-12-11 14:03:04.911292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:22:11.890 [2024-12-11 14:03:04.911308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.149 [2024-12-11 14:03:04.949252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.149 [2024-12-11 14:03:04.949486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:12.149 [2024-12-11 14:03:04.949511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.934 ms 00:22:12.149 [2024-12-11 14:03:04.949524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.149 [2024-12-11 14:03:04.949661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.149 [2024-12-11 14:03:04.949675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:12.149 [2024-12-11 14:03:04.949688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:12.149 [2024-12-11 14:03:04.949699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.149 [2024-12-11 14:03:05.001568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.149 [2024-12-11 14:03:05.001606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:12.149 [2024-12-11 14:03:05.001623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.929 ms 00:22:12.149 [2024-12-11 14:03:05.001634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.149 [2024-12-11 14:03:05.001726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.149 [2024-12-11 14:03:05.001739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:12.150 [2024-12-11 14:03:05.001751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:12.150 [2024-12-11 14:03:05.001761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.150 [2024-12-11 14:03:05.002237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.150 [2024-12-11 14:03:05.002252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:12.150 [2024-12-11 14:03:05.002264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:22:12.150 [2024-12-11 14:03:05.002278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.150 [2024-12-11 14:03:05.002418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:12.150 [2024-12-11 14:03:05.002434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:12.150 [2024-12-11 14:03:05.002446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:22:12.150 [2024-12-11 14:03:05.002456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.150 [2024-12-11 14:03:05.022114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.150 [2024-12-11 14:03:05.022151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:12.150 [2024-12-11 14:03:05.022165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.665 ms 00:22:12.150 [2024-12-11 14:03:05.022176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.150 [2024-12-11 14:03:05.041573] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:12.150 [2024-12-11 14:03:05.041610] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:12.150 [2024-12-11 14:03:05.041625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.150 [2024-12-11 14:03:05.041652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:12.150 [2024-12-11 14:03:05.041663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.370 ms 00:22:12.150 [2024-12-11 14:03:05.041673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.150 [2024-12-11 14:03:05.071573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.150 [2024-12-11 14:03:05.071734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:12.150 [2024-12-11 14:03:05.071756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.868 ms 00:22:12.150 [2024-12-11 14:03:05.071768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.150 [2024-12-11 14:03:05.090389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.150 [2024-12-11 14:03:05.090545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:12.150 [2024-12-11 14:03:05.090569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.557 ms 00:22:12.150 [2024-12-11 14:03:05.090581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.150 [2024-12-11 14:03:05.107802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.150 [2024-12-11 14:03:05.107857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:12.150 [2024-12-11 14:03:05.107870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.173 ms 00:22:12.150 [2024-12-11 14:03:05.107881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.150 [2024-12-11 14:03:05.108708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.150 [2024-12-11 14:03:05.108744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:12.150 [2024-12-11 14:03:05.108758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:22:12.150 [2024-12-11 14:03:05.108768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.150 [2024-12-11 14:03:05.192188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.150 [2024-12-11 
14:03:05.192249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:12.150 [2024-12-11 14:03:05.192266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.524 ms 00:22:12.150 [2024-12-11 14:03:05.192293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.409 [2024-12-11 14:03:05.203400] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:12.410 [2024-12-11 14:03:05.219583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.410 [2024-12-11 14:03:05.219632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:12.410 [2024-12-11 14:03:05.219648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.234 ms 00:22:12.410 [2024-12-11 14:03:05.219664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.410 [2024-12-11 14:03:05.219792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.410 [2024-12-11 14:03:05.219805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:12.410 [2024-12-11 14:03:05.219817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:12.410 [2024-12-11 14:03:05.219844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.410 [2024-12-11 14:03:05.219917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.410 [2024-12-11 14:03:05.219928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:12.410 [2024-12-11 14:03:05.219939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:12.410 [2024-12-11 14:03:05.219972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.410 [2024-12-11 14:03:05.220008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.410 [2024-12-11 14:03:05.220022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:12.410 [2024-12-11 14:03:05.220033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:12.410 [2024-12-11 14:03:05.220043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.410 [2024-12-11 14:03:05.220079] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:12.410 [2024-12-11 14:03:05.220091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.410 [2024-12-11 14:03:05.220101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:12.410 [2024-12-11 14:03:05.220112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:12.410 [2024-12-11 14:03:05.220121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.410 [2024-12-11 14:03:05.256519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.410 [2024-12-11 14:03:05.256558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:12.410 [2024-12-11 14:03:05.256573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.431 ms 00:22:12.410 [2024-12-11 14:03:05.256583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.410 [2024-12-11 14:03:05.256693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.410 [2024-12-11 14:03:05.256707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:12.410 [2024-12-11 
14:03:05.256718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:12.410 [2024-12-11 14:03:05.256728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.410 [2024-12-11 14:03:05.257638] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:12.410 [2024-12-11 14:03:05.261724] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 385.312 ms, result 0 00:22:12.410 [2024-12-11 14:03:05.262618] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:12.410 [2024-12-11 14:03:05.281281] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:13.347  [2024-12-11T14:03:07.772Z] Copying: 26/256 [MB] (26 MBps) [2024-12-11T14:03:08.709Z] Copying: 50/256 [MB] (24 MBps) [2024-12-11T14:03:09.650Z] Copying: 75/256 [MB] (24 MBps) [2024-12-11T14:03:10.597Z] Copying: 98/256 [MB] (23 MBps) [2024-12-11T14:03:11.532Z] Copying: 122/256 [MB] (23 MBps) [2024-12-11T14:03:12.468Z] Copying: 146/256 [MB] (23 MBps) [2024-12-11T14:03:13.405Z] Copying: 170/256 [MB] (24 MBps) [2024-12-11T14:03:14.337Z] Copying: 195/256 [MB] (24 MBps) [2024-12-11T14:03:15.716Z] Copying: 219/256 [MB] (24 MBps) [2024-12-11T14:03:15.974Z] Copying: 243/256 [MB] (24 MBps) [2024-12-11T14:03:16.543Z] Copying: 256/256 [MB] (average 24 MBps)[2024-12-11 14:03:16.252427] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:23.496 [2024-12-11 14:03:16.284148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.496 [2024-12-11 14:03:16.284372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:23.496 [2024-12-11 14:03:16.284414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:23.496 [2024-12-11 14:03:16.284428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.496 [2024-12-11 14:03:16.284475] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:23.496 [2024-12-11 14:03:16.289685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.496 [2024-12-11 14:03:16.289737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:23.496 [2024-12-11 14:03:16.289749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.195 ms 00:22:23.496 [2024-12-11 14:03:16.289775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.496 [2024-12-11 14:03:16.290047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.496 [2024-12-11 14:03:16.290061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:23.496 [2024-12-11 14:03:16.290072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:22:23.496 [2024-12-11 14:03:16.290090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.496 [2024-12-11 14:03:16.292906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.496 [2024-12-11 14:03:16.292929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:23.496 [2024-12-11 14:03:16.292941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.798 ms 00:22:23.496 [2024-12-11 14:03:16.292951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:23.496 [2024-12-11 14:03:16.298288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.496 [2024-12-11 14:03:16.298323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:23.496 [2024-12-11 14:03:16.298334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.306 ms 00:22:23.496 [2024-12-11 14:03:16.298344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.496 [2024-12-11 14:03:16.332563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.496 [2024-12-11 14:03:16.332601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:23.496 [2024-12-11 14:03:16.332614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.212 ms 00:22:23.496 [2024-12-11 14:03:16.332624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.496 [2024-12-11 14:03:16.352292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.496 [2024-12-11 14:03:16.352330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:23.496 [2024-12-11 14:03:16.352349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.639 ms 00:22:23.496 [2024-12-11 14:03:16.352358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.496 [2024-12-11 14:03:16.352509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.496 [2024-12-11 14:03:16.352522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:23.496 [2024-12-11 14:03:16.352543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:22:23.496 [2024-12-11 14:03:16.352553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.497 [2024-12-11 14:03:16.387352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.497 [2024-12-11 14:03:16.387389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:23.497 [2024-12-11 14:03:16.387401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.838 ms 00:22:23.497 [2024-12-11 14:03:16.387426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.497 [2024-12-11 14:03:16.422648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.497 [2024-12-11 14:03:16.422685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:23.497 [2024-12-11 14:03:16.422697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.223 ms 00:22:23.497 [2024-12-11 14:03:16.422707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.497 [2024-12-11 14:03:16.457026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.497 [2024-12-11 14:03:16.457190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:23.497 [2024-12-11 14:03:16.457212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.318 ms 00:22:23.497 [2024-12-11 14:03:16.457222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.497 [2024-12-11 14:03:16.491116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.497 [2024-12-11 14:03:16.491151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:23.497 [2024-12-11 14:03:16.491164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.851 ms 00:22:23.497 
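The trace_step records above belong to the 'FTL shutdown' management process: each metadata region (L2P, NV cache, valid map, P2L, band info, trim, superblock) is persisted in turn, and every record carries its own duration. Summed, the persist steps account for roughly 200 ms of the 490.655 ms shutdown total reported further down. A minimal sketch for tallying those figures from a saved console log, assuming the output above was captured to a file (ftl.log is a hypothetical name):

    # Sum every per-step duration that trace_step printed.
    # ftl.log is a hypothetical capture of the console output above.
    grep -o 'duration: [0-9.]* ms' ftl.log \
      | awk '{ total += $2 } END { printf "total: %.3f ms\n", total }'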
[2024-12-11 14:03:16.491173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.497 [2024-12-11 14:03:16.491226] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:23.497 [2024-12-11 14:03:16.491242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491468] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 
14:03:16.491711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.491982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:22:23.497 [2024-12-11 14:03:16.491993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.492003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.492013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.492024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.492034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:23.497 [2024-12-11 14:03:16.492044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:23.498 [2024-12-11 14:03:16.492297] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:23.498 [2024-12-11 14:03:16.492306] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1fac6d1-37a7-4b75-a43a-f4195852c0c7 00:22:23.498 [2024-12-11 14:03:16.492316] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:23.498 [2024-12-11 14:03:16.492326] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:23.498 [2024-12-11 14:03:16.492341] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:23.498 [2024-12-11 14:03:16.492374] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:23.498 [2024-12-11 14:03:16.492387] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:23.498 [2024-12-11 14:03:16.492401] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:23.498 [2024-12-11 14:03:16.492420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:23.498 [2024-12-11 14:03:16.492434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:23.498 [2024-12-11 14:03:16.492443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:23.498 [2024-12-11 14:03:16.492453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.498 [2024-12-11 14:03:16.492463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:23.498 [2024-12-11 14:03:16.492473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.230 ms 00:22:23.498 [2024-12-11 14:03:16.492483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.498 [2024-12-11 14:03:16.511507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.498 [2024-12-11 14:03:16.511541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:23.498 [2024-12-11 14:03:16.511553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.032 ms 00:22:23.498 [2024-12-11 14:03:16.511579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.498 [2024-12-11 14:03:16.512116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.498 [2024-12-11 14:03:16.512129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:23.498 [2024-12-11 14:03:16.512139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:22:23.498 [2024-12-11 14:03:16.512149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.757 [2024-12-11 14:03:16.563078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.757 [2024-12-11 14:03:16.563112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:23.757 [2024-12-11 14:03:16.563124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.757 [2024-12-11 14:03:16.563139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.757 [2024-12-11 14:03:16.563219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.757 [2024-12-11 14:03:16.563231] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:23.757 [2024-12-11 14:03:16.563241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.757 [2024-12-11 14:03:16.563251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.757 [2024-12-11 14:03:16.563295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.757 [2024-12-11 14:03:16.563307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:23.757 [2024-12-11 14:03:16.563316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.757 [2024-12-11 14:03:16.563325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.757 [2024-12-11 14:03:16.563347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.757 [2024-12-11 14:03:16.563357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:23.757 [2024-12-11 14:03:16.563366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.757 [2024-12-11 14:03:16.563375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.757 [2024-12-11 14:03:16.678287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.757 [2024-12-11 14:03:16.678339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:23.757 [2024-12-11 14:03:16.678352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.757 [2024-12-11 14:03:16.678362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.757 [2024-12-11 14:03:16.773002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.757 [2024-12-11 14:03:16.773238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:23.757 [2024-12-11 14:03:16.773262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.757 [2024-12-11 14:03:16.773274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.757 [2024-12-11 14:03:16.773338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.757 [2024-12-11 14:03:16.773349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:23.757 [2024-12-11 14:03:16.773360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.757 [2024-12-11 14:03:16.773370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.757 [2024-12-11 14:03:16.773399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.757 [2024-12-11 14:03:16.773416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:23.757 [2024-12-11 14:03:16.773426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.757 [2024-12-11 14:03:16.773436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.757 [2024-12-11 14:03:16.773554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.757 [2024-12-11 14:03:16.773567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:23.757 [2024-12-11 14:03:16.773577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.757 [2024-12-11 14:03:16.773588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.757 [2024-12-11 14:03:16.773623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:22:23.757 [2024-12-11 14:03:16.773635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:23.757 [2024-12-11 14:03:16.773650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.757 [2024-12-11 14:03:16.773660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.757 [2024-12-11 14:03:16.773699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.757 [2024-12-11 14:03:16.773709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:23.757 [2024-12-11 14:03:16.773719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.758 [2024-12-11 14:03:16.773729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.758 [2024-12-11 14:03:16.773771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.758 [2024-12-11 14:03:16.773787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:23.758 [2024-12-11 14:03:16.773799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.758 [2024-12-11 14:03:16.773809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.758 [2024-12-11 14:03:16.773988] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 490.655 ms, result 0 00:22:25.136 00:22:25.136 00:22:25.136 14:03:17 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:25.395 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:22:25.395 14:03:18 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:22:25.395 14:03:18 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:22:25.395 14:03:18 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:25.395 14:03:18 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:25.395 14:03:18 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:22:25.395 14:03:18 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:25.395 14:03:18 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79787 00:22:25.395 Process with pid 79787 is not found 00:22:25.395 14:03:18 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79787 ']' 00:22:25.395 14:03:18 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79787 00:22:25.395 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79787) - No such process 00:22:25.395 14:03:18 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 79787 is not found' 00:22:25.395 00:22:25.395 real 1m11.184s 00:22:25.395 user 1m36.797s 00:22:25.395 sys 0m6.799s 00:22:25.395 14:03:18 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:25.395 14:03:18 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:25.395 ************************************ 00:22:25.395 END TEST ftl_trim 00:22:25.395 ************************************ 00:22:25.395 14:03:18 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:25.395 14:03:18 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:25.395 14:03:18 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:25.395 14:03:18 ftl -- common/autotest_common.sh@10 
-- # set +x 00:22:25.395 ************************************ 00:22:25.395 START TEST ftl_restore 00:22:25.395 ************************************ 00:22:25.395 14:03:18 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:25.654 * Looking for test storage... 00:22:25.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:25.654 14:03:18 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:25.654 14:03:18 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:22:25.654 14:03:18 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:25.654 14:03:18 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:25.654 14:03:18 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:25.654 14:03:18 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:25.654 14:03:18 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:25.654 14:03:18 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:22:25.654 14:03:18 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:22:25.654 14:03:18 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:22:25.654 14:03:18 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:22:25.654 14:03:18 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:22:25.654 14:03:18 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:22:25.654 14:03:18 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:22:25.654 14:03:18 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:25.654 14:03:18 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:22:25.654 14:03:18 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:22:25.654 14:03:18 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:25.654 14:03:18 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:25.655 14:03:18 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:22:25.655 14:03:18 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:22:25.655 14:03:18 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:25.655 14:03:18 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:22:25.655 14:03:18 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:22:25.655 14:03:18 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:22:25.655 14:03:18 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:22:25.655 14:03:18 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:25.655 14:03:18 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:22:25.655 14:03:18 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:22:25.655 14:03:18 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:25.655 14:03:18 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:25.655 14:03:18 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:22:25.655 14:03:18 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:25.655 14:03:18 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:25.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.655 --rc genhtml_branch_coverage=1 00:22:25.655 --rc genhtml_function_coverage=1 00:22:25.655 --rc genhtml_legend=1 00:22:25.655 --rc geninfo_all_blocks=1 00:22:25.655 --rc geninfo_unexecuted_blocks=1 00:22:25.655 00:22:25.655 ' 00:22:25.655 14:03:18 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:25.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.655 --rc genhtml_branch_coverage=1 00:22:25.655 --rc genhtml_function_coverage=1 00:22:25.655 --rc genhtml_legend=1 00:22:25.655 --rc geninfo_all_blocks=1 00:22:25.655 --rc geninfo_unexecuted_blocks=1 00:22:25.655 00:22:25.655 ' 00:22:25.655 14:03:18 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:25.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.655 --rc genhtml_branch_coverage=1 00:22:25.655 --rc genhtml_function_coverage=1 00:22:25.655 --rc genhtml_legend=1 00:22:25.655 --rc geninfo_all_blocks=1 00:22:25.655 --rc geninfo_unexecuted_blocks=1 00:22:25.655 00:22:25.655 ' 00:22:25.655 14:03:18 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:25.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.655 --rc genhtml_branch_coverage=1 00:22:25.655 --rc genhtml_function_coverage=1 00:22:25.655 --rc genhtml_legend=1 00:22:25.655 --rc geninfo_all_blocks=1 00:22:25.655 --rc geninfo_unexecuted_blocks=1 00:22:25.655 00:22:25.655 ' 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
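The xtrace above shows scripts/common.sh deciding whether the installed lcov predates 2.x: lt 1.15 2 splits both version strings on '.', '-' and ':', compares them component by component, and since 1 < 2 the branch/function coverage --rc options get exported. A simplified, hedged reconstruction of that comparison (padding missing components with 0 via :-0 is an assumption; the real helper routes each component through its own decimal handling):

    # Return 0 (true) when version $1 sorts strictly before version $2.
    lt() {
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        # Walk the longer component list; missing components count as 0.
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1
    }
    lt 1.15 2 && echo 'older than 2.x'    # prints: older than 2.x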
00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:25.655 14:03:18 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.fnFksTOs7r 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:25.914 
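Before restore.sh can start its target it parses its arguments; the xtrace above shows getopts :u:c:f pulling out -c 0000:00:10.0 as the NV cache device, a shift leaving 0000:00:11.0 as the base device, a 240-second timeout, and a trap that routes any interruption through restore_kill. A hedged sketch of that flow (the meanings of -u and -f are assumptions; restore_kill is defined elsewhere in restore.sh):

    while getopts ':u:c:f' opt; do
        case $opt in
            u) uuid=$OPTARG ;;        # assumed: a device UUID to reuse
            c) nv_cache=$OPTARG ;;    # 0000:00:10.0 in this run
            f) fast_shutdown=1 ;;     # assumed meaning of -f
        esac
    done
    shift $((OPTIND - 1))             # the trace shifts by 2 here
    device=$1                         # 0000:00:11.0 in this run
    timeout=240
    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT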
14:03:18 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=80073 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:25.914 14:03:18 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 80073 00:22:25.914 14:03:18 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 80073 ']' 00:22:25.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.914 14:03:18 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.914 14:03:18 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:25.914 14:03:18 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.914 14:03:18 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:25.914 14:03:18 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:25.914 [2024-12-11 14:03:18.823810] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:22:25.914 [2024-12-11 14:03:18.823963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80073 ] 00:22:26.175 [2024-12-11 14:03:19.004995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.175 [2024-12-11 14:03:19.110564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.111 14:03:19 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:27.111 14:03:19 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:22:27.111 14:03:19 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:27.111 14:03:19 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:22:27.111 14:03:19 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:27.111 14:03:19 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:22:27.111 14:03:19 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:22:27.111 14:03:19 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:27.370 14:03:20 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:27.370 14:03:20 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:22:27.370 14:03:20 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:27.370 14:03:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:27.370 14:03:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:27.370 14:03:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:27.370 14:03:20 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:27.370 14:03:20 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:27.630 14:03:20 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:27.630 { 00:22:27.630 "name": "nvme0n1", 00:22:27.630 "aliases": [ 00:22:27.630 "0adf1704-912b-4535-97d4-35ab33608146" 00:22:27.630 ], 00:22:27.630 "product_name": "NVMe disk", 00:22:27.630 "block_size": 4096, 00:22:27.630 "num_blocks": 1310720, 00:22:27.630 "uuid": 
"0adf1704-912b-4535-97d4-35ab33608146", 00:22:27.630 "numa_id": -1, 00:22:27.630 "assigned_rate_limits": { 00:22:27.630 "rw_ios_per_sec": 0, 00:22:27.630 "rw_mbytes_per_sec": 0, 00:22:27.630 "r_mbytes_per_sec": 0, 00:22:27.630 "w_mbytes_per_sec": 0 00:22:27.630 }, 00:22:27.630 "claimed": true, 00:22:27.630 "claim_type": "read_many_write_one", 00:22:27.630 "zoned": false, 00:22:27.630 "supported_io_types": { 00:22:27.630 "read": true, 00:22:27.630 "write": true, 00:22:27.630 "unmap": true, 00:22:27.630 "flush": true, 00:22:27.630 "reset": true, 00:22:27.630 "nvme_admin": true, 00:22:27.630 "nvme_io": true, 00:22:27.630 "nvme_io_md": false, 00:22:27.630 "write_zeroes": true, 00:22:27.630 "zcopy": false, 00:22:27.630 "get_zone_info": false, 00:22:27.630 "zone_management": false, 00:22:27.630 "zone_append": false, 00:22:27.630 "compare": true, 00:22:27.630 "compare_and_write": false, 00:22:27.630 "abort": true, 00:22:27.630 "seek_hole": false, 00:22:27.630 "seek_data": false, 00:22:27.630 "copy": true, 00:22:27.630 "nvme_iov_md": false 00:22:27.630 }, 00:22:27.630 "driver_specific": { 00:22:27.630 "nvme": [ 00:22:27.630 { 00:22:27.630 "pci_address": "0000:00:11.0", 00:22:27.630 "trid": { 00:22:27.630 "trtype": "PCIe", 00:22:27.630 "traddr": "0000:00:11.0" 00:22:27.630 }, 00:22:27.630 "ctrlr_data": { 00:22:27.630 "cntlid": 0, 00:22:27.630 "vendor_id": "0x1b36", 00:22:27.630 "model_number": "QEMU NVMe Ctrl", 00:22:27.630 "serial_number": "12341", 00:22:27.630 "firmware_revision": "8.0.0", 00:22:27.630 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:27.630 "oacs": { 00:22:27.630 "security": 0, 00:22:27.630 "format": 1, 00:22:27.630 "firmware": 0, 00:22:27.630 "ns_manage": 1 00:22:27.630 }, 00:22:27.630 "multi_ctrlr": false, 00:22:27.630 "ana_reporting": false 00:22:27.630 }, 00:22:27.630 "vs": { 00:22:27.630 "nvme_version": "1.4" 00:22:27.630 }, 00:22:27.630 "ns_data": { 00:22:27.630 "id": 1, 00:22:27.630 "can_share": false 00:22:27.630 } 00:22:27.630 } 00:22:27.630 ], 00:22:27.630 "mp_policy": "active_passive" 00:22:27.630 } 00:22:27.630 } 00:22:27.630 ]' 00:22:27.630 14:03:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:27.630 14:03:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:27.630 14:03:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:27.630 14:03:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:27.630 14:03:20 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:27.630 14:03:20 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:22:27.630 14:03:20 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:22:27.630 14:03:20 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:27.630 14:03:20 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:22:27.630 14:03:20 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:27.630 14:03:20 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:27.890 14:03:20 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=62daa9e1-bd5d-41b4-8276-589c7608d036 00:22:27.890 14:03:20 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:22:27.890 14:03:20 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 62daa9e1-bd5d-41b4-8276-589c7608d036 00:22:27.890 14:03:20 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:22:28.149 14:03:21 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=a4292fb2-0ed5-4e0f-a547-22e00a2d84c7 00:22:28.149 14:03:21 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a4292fb2-0ed5-4e0f-a547-22e00a2d84c7 00:22:28.408 14:03:21 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=02ec844a-5f8f-4e8c-8ed9-c2ca417d688b 00:22:28.408 14:03:21 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:28.408 14:03:21 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 02ec844a-5f8f-4e8c-8ed9-c2ca417d688b 00:22:28.408 14:03:21 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:22:28.408 14:03:21 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:28.408 14:03:21 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=02ec844a-5f8f-4e8c-8ed9-c2ca417d688b 00:22:28.408 14:03:21 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:22:28.408 14:03:21 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 02ec844a-5f8f-4e8c-8ed9-c2ca417d688b 00:22:28.408 14:03:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=02ec844a-5f8f-4e8c-8ed9-c2ca417d688b 00:22:28.408 14:03:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:28.408 14:03:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:28.408 14:03:21 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:28.408 14:03:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 02ec844a-5f8f-4e8c-8ed9-c2ca417d688b 00:22:28.667 14:03:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:28.667 { 00:22:28.667 "name": "02ec844a-5f8f-4e8c-8ed9-c2ca417d688b", 00:22:28.667 "aliases": [ 00:22:28.667 "lvs/nvme0n1p0" 00:22:28.667 ], 00:22:28.667 "product_name": "Logical Volume", 00:22:28.667 "block_size": 4096, 00:22:28.667 "num_blocks": 26476544, 00:22:28.667 "uuid": "02ec844a-5f8f-4e8c-8ed9-c2ca417d688b", 00:22:28.667 "assigned_rate_limits": { 00:22:28.667 "rw_ios_per_sec": 0, 00:22:28.667 "rw_mbytes_per_sec": 0, 00:22:28.668 "r_mbytes_per_sec": 0, 00:22:28.668 "w_mbytes_per_sec": 0 00:22:28.668 }, 00:22:28.668 "claimed": false, 00:22:28.668 "zoned": false, 00:22:28.668 "supported_io_types": { 00:22:28.668 "read": true, 00:22:28.668 "write": true, 00:22:28.668 "unmap": true, 00:22:28.668 "flush": false, 00:22:28.668 "reset": true, 00:22:28.668 "nvme_admin": false, 00:22:28.668 "nvme_io": false, 00:22:28.668 "nvme_io_md": false, 00:22:28.668 "write_zeroes": true, 00:22:28.668 "zcopy": false, 00:22:28.668 "get_zone_info": false, 00:22:28.668 "zone_management": false, 00:22:28.668 "zone_append": false, 00:22:28.668 "compare": false, 00:22:28.668 "compare_and_write": false, 00:22:28.668 "abort": false, 00:22:28.668 "seek_hole": true, 00:22:28.668 "seek_data": true, 00:22:28.668 "copy": false, 00:22:28.668 "nvme_iov_md": false 00:22:28.668 }, 00:22:28.668 "driver_specific": { 00:22:28.668 "lvol": { 00:22:28.668 "lvol_store_uuid": "a4292fb2-0ed5-4e0f-a547-22e00a2d84c7", 00:22:28.668 "base_bdev": "nvme0n1", 00:22:28.668 "thin_provision": true, 00:22:28.668 "num_allocated_clusters": 0, 00:22:28.668 "snapshot": false, 00:22:28.668 "clone": false, 00:22:28.668 "esnap_clone": false 00:22:28.668 } 00:22:28.668 } 00:22:28.668 } 00:22:28.668 ]' 00:22:28.668 14:03:21 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:28.668 14:03:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:28.668 14:03:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:28.668 14:03:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:28.668 14:03:21 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:28.668 14:03:21 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:28.668 14:03:21 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:22:28.668 14:03:21 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:22:28.668 14:03:21 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:28.927 14:03:21 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:28.927 14:03:21 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:28.927 14:03:21 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 02ec844a-5f8f-4e8c-8ed9-c2ca417d688b 00:22:28.927 14:03:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=02ec844a-5f8f-4e8c-8ed9-c2ca417d688b 00:22:28.927 14:03:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:28.927 14:03:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:28.927 14:03:21 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:28.927 14:03:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 02ec844a-5f8f-4e8c-8ed9-c2ca417d688b 00:22:29.186 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:29.186 { 00:22:29.186 "name": "02ec844a-5f8f-4e8c-8ed9-c2ca417d688b", 00:22:29.186 "aliases": [ 00:22:29.186 "lvs/nvme0n1p0" 00:22:29.186 ], 00:22:29.186 "product_name": "Logical Volume", 00:22:29.186 "block_size": 4096, 00:22:29.186 "num_blocks": 26476544, 00:22:29.186 "uuid": "02ec844a-5f8f-4e8c-8ed9-c2ca417d688b", 00:22:29.186 "assigned_rate_limits": { 00:22:29.186 "rw_ios_per_sec": 0, 00:22:29.186 "rw_mbytes_per_sec": 0, 00:22:29.186 "r_mbytes_per_sec": 0, 00:22:29.186 "w_mbytes_per_sec": 0 00:22:29.186 }, 00:22:29.186 "claimed": false, 00:22:29.186 "zoned": false, 00:22:29.186 "supported_io_types": { 00:22:29.186 "read": true, 00:22:29.186 "write": true, 00:22:29.186 "unmap": true, 00:22:29.186 "flush": false, 00:22:29.186 "reset": true, 00:22:29.186 "nvme_admin": false, 00:22:29.186 "nvme_io": false, 00:22:29.186 "nvme_io_md": false, 00:22:29.186 "write_zeroes": true, 00:22:29.186 "zcopy": false, 00:22:29.186 "get_zone_info": false, 00:22:29.186 "zone_management": false, 00:22:29.186 "zone_append": false, 00:22:29.186 "compare": false, 00:22:29.186 "compare_and_write": false, 00:22:29.186 "abort": false, 00:22:29.186 "seek_hole": true, 00:22:29.186 "seek_data": true, 00:22:29.186 "copy": false, 00:22:29.186 "nvme_iov_md": false 00:22:29.186 }, 00:22:29.186 "driver_specific": { 00:22:29.186 "lvol": { 00:22:29.186 "lvol_store_uuid": "a4292fb2-0ed5-4e0f-a547-22e00a2d84c7", 00:22:29.186 "base_bdev": "nvme0n1", 00:22:29.186 "thin_provision": true, 00:22:29.186 "num_allocated_clusters": 0, 00:22:29.186 "snapshot": false, 00:22:29.186 "clone": false, 00:22:29.187 "esnap_clone": false 00:22:29.187 } 00:22:29.187 } 00:22:29.187 } 00:22:29.187 ]' 00:22:29.187 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
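The bs/nb pairs that get_bdev_size extracts via jq above reduce to MiB with plain shell arithmetic:

    # block_size * num_blocks, expressed in MiB:
    echo $(( 4096 * 1310720  / 1024 / 1024 ))   # nvme0n1 -> 5120 (MiB)
    echo $(( 4096 * 26476544 / 1024 / 1024 ))   # lvol    -> 103424 (MiB)

The base_size=5171 set above appears to be the 5120 MiB base plus ~1% head-room (5120 * 101 / 100 = 5171 after truncation); that is an inference from the numbers, not from ftl/common.sh.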
00:22:29.187 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:29.187 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:29.187 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:29.187 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:29.187 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:29.187 14:03:22 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:22:29.187 14:03:22 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:29.446 14:03:22 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:29.446 14:03:22 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 02ec844a-5f8f-4e8c-8ed9-c2ca417d688b 00:22:29.446 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=02ec844a-5f8f-4e8c-8ed9-c2ca417d688b 00:22:29.446 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:29.446 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:29.446 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:29.446 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 02ec844a-5f8f-4e8c-8ed9-c2ca417d688b 00:22:29.705 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:29.705 { 00:22:29.705 "name": "02ec844a-5f8f-4e8c-8ed9-c2ca417d688b", 00:22:29.705 "aliases": [ 00:22:29.705 "lvs/nvme0n1p0" 00:22:29.705 ], 00:22:29.705 "product_name": "Logical Volume", 00:22:29.705 "block_size": 4096, 00:22:29.705 "num_blocks": 26476544, 00:22:29.705 "uuid": "02ec844a-5f8f-4e8c-8ed9-c2ca417d688b", 00:22:29.705 "assigned_rate_limits": { 00:22:29.705 "rw_ios_per_sec": 0, 00:22:29.705 "rw_mbytes_per_sec": 0, 00:22:29.705 "r_mbytes_per_sec": 0, 00:22:29.705 "w_mbytes_per_sec": 0 00:22:29.705 }, 00:22:29.705 "claimed": false, 00:22:29.705 "zoned": false, 00:22:29.705 "supported_io_types": { 00:22:29.705 "read": true, 00:22:29.705 "write": true, 00:22:29.705 "unmap": true, 00:22:29.705 "flush": false, 00:22:29.705 "reset": true, 00:22:29.705 "nvme_admin": false, 00:22:29.705 "nvme_io": false, 00:22:29.705 "nvme_io_md": false, 00:22:29.705 "write_zeroes": true, 00:22:29.705 "zcopy": false, 00:22:29.705 "get_zone_info": false, 00:22:29.705 "zone_management": false, 00:22:29.705 "zone_append": false, 00:22:29.705 "compare": false, 00:22:29.705 "compare_and_write": false, 00:22:29.705 "abort": false, 00:22:29.705 "seek_hole": true, 00:22:29.705 "seek_data": true, 00:22:29.705 "copy": false, 00:22:29.705 "nvme_iov_md": false 00:22:29.705 }, 00:22:29.705 "driver_specific": { 00:22:29.705 "lvol": { 00:22:29.705 "lvol_store_uuid": "a4292fb2-0ed5-4e0f-a547-22e00a2d84c7", 00:22:29.705 "base_bdev": "nvme0n1", 00:22:29.705 "thin_provision": true, 00:22:29.705 "num_allocated_clusters": 0, 00:22:29.705 "snapshot": false, 00:22:29.705 "clone": false, 00:22:29.705 "esnap_clone": false 00:22:29.705 } 00:22:29.705 } 00:22:29.705 } 00:22:29.705 ]' 00:22:29.705 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:29.705 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:29.705 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:29.705 14:03:22 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:22:29.705 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:29.705 14:03:22 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:29.705 14:03:22 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:29.705 14:03:22 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 02ec844a-5f8f-4e8c-8ed9-c2ca417d688b --l2p_dram_limit 10' 00:22:29.705 14:03:22 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:29.705 14:03:22 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:29.705 14:03:22 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:29.705 14:03:22 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:22:29.705 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:22:29.705 14:03:22 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 02ec844a-5f8f-4e8c-8ed9-c2ca417d688b --l2p_dram_limit 10 -c nvc0n1p0 00:22:29.965 [2024-12-11 14:03:22.842372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.965 [2024-12-11 14:03:22.842575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:29.965 [2024-12-11 14:03:22.842770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:29.965 [2024-12-11 14:03:22.842816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.965 [2024-12-11 14:03:22.842943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.965 [2024-12-11 14:03:22.843083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:29.965 [2024-12-11 14:03:22.843134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:29.965 [2024-12-11 14:03:22.843167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.965 [2024-12-11 14:03:22.843233] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:29.965 [2024-12-11 14:03:22.844351] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:29.965 [2024-12-11 14:03:22.844525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.965 [2024-12-11 14:03:22.844619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:29.965 [2024-12-11 14:03:22.844735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:22:29.965 [2024-12-11 14:03:22.844777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.965 [2024-12-11 14:03:22.845025] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d0422185-0cc6-471c-ae4b-64140d5ed839 00:22:29.965 [2024-12-11 14:03:22.846605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.965 [2024-12-11 14:03:22.846759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:29.965 [2024-12-11 14:03:22.846864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:29.965 [2024-12-11 14:03:22.846998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.965 [2024-12-11 14:03:22.854573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.965 [2024-12-11 
14:03:22.854728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:29.965 [2024-12-11 14:03:22.854923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.468 ms 00:22:29.965 [2024-12-11 14:03:22.854971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.965 [2024-12-11 14:03:22.855159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.965 [2024-12-11 14:03:22.855210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:29.965 [2024-12-11 14:03:22.855244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:22:29.965 [2024-12-11 14:03:22.855349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.965 [2024-12-11 14:03:22.855450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.965 [2024-12-11 14:03:22.855495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:29.965 [2024-12-11 14:03:22.855630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:29.965 [2024-12-11 14:03:22.855731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.965 [2024-12-11 14:03:22.855794] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:29.965 [2024-12-11 14:03:22.861178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.965 [2024-12-11 14:03:22.861320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:29.965 [2024-12-11 14:03:22.861351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.397 ms 00:22:29.965 [2024-12-11 14:03:22.861363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.965 [2024-12-11 14:03:22.861407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.965 [2024-12-11 14:03:22.861419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:29.965 [2024-12-11 14:03:22.861433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:29.965 [2024-12-11 14:03:22.861444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.966 [2024-12-11 14:03:22.861493] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:29.966 [2024-12-11 14:03:22.861631] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:29.966 [2024-12-11 14:03:22.861653] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:29.966 [2024-12-11 14:03:22.861668] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:29.966 [2024-12-11 14:03:22.861685] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:29.966 [2024-12-11 14:03:22.861697] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:29.966 [2024-12-11 14:03:22.861712] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:29.966 [2024-12-11 14:03:22.861724] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:29.966 [2024-12-11 14:03:22.861742] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:29.966 [2024-12-11 14:03:22.861752] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:29.966 [2024-12-11 14:03:22.861765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.966 [2024-12-11 14:03:22.861787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:29.966 [2024-12-11 14:03:22.861801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:22:29.966 [2024-12-11 14:03:22.861812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.966 [2024-12-11 14:03:22.861906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.966 [2024-12-11 14:03:22.861919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:29.966 [2024-12-11 14:03:22.861933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:29.966 [2024-12-11 14:03:22.861943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.966 [2024-12-11 14:03:22.862038] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:29.966 [2024-12-11 14:03:22.862051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:29.966 [2024-12-11 14:03:22.862065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:29.966 [2024-12-11 14:03:22.862076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.966 [2024-12-11 14:03:22.862107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:29.966 [2024-12-11 14:03:22.862118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:29.966 [2024-12-11 14:03:22.862137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:29.966 [2024-12-11 14:03:22.862153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:29.966 [2024-12-11 14:03:22.862175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:29.966 [2024-12-11 14:03:22.862189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.966 [2024-12-11 14:03:22.862201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:29.966 [2024-12-11 14:03:22.862212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:29.966 [2024-12-11 14:03:22.862226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.966 [2024-12-11 14:03:22.862236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:29.966 [2024-12-11 14:03:22.862250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:29.966 [2024-12-11 14:03:22.862259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.966 [2024-12-11 14:03:22.862274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:29.966 [2024-12-11 14:03:22.862284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:29.966 [2024-12-11 14:03:22.862297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.966 [2024-12-11 14:03:22.862307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:29.966 [2024-12-11 14:03:22.862319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:29.966 [2024-12-11 14:03:22.862329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.966 [2024-12-11 14:03:22.862341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:29.966 
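Two asides on this stretch of the log. First, the '/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected' complaint a few lines back is the classic empty-string-versus-integer test, '[' '' -eq 1 ']'; a defensive form defaults the variable first (a sketch, both names hypothetical, not the upstream fix):

    [ "${fast_shutdown:-0}" -eq 1 ] && do_fast_shutdown

Second, the FTL bdev being brought up in this startup trace was created with exactly the arguments assembled above:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d 02ec844a-5f8f-4e8c-8ed9-c2ca417d688b --l2p_dram_limit 10 -c nvc0n1p0

i.e. base device = the thin-provisioned lvol, NV cache = the 5171 MiB split of nvc0n1 (nvc0n1p0), and the L2P DRAM budget capped at 10 MiB.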
[2024-12-11 14:03:22.862350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:29.966 [2024-12-11 14:03:22.862363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.966 [2024-12-11 14:03:22.862373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:29.966 [2024-12-11 14:03:22.862385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:29.966 [2024-12-11 14:03:22.862394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.966 [2024-12-11 14:03:22.862406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:29.966 [2024-12-11 14:03:22.862416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:29.966 [2024-12-11 14:03:22.862428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.966 [2024-12-11 14:03:22.862437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:29.966 [2024-12-11 14:03:22.862452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:29.966 [2024-12-11 14:03:22.862461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.966 [2024-12-11 14:03:22.862474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:29.966 [2024-12-11 14:03:22.862484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:29.966 [2024-12-11 14:03:22.862495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.966 [2024-12-11 14:03:22.862505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:29.966 [2024-12-11 14:03:22.862519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:29.966 [2024-12-11 14:03:22.862535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.966 [2024-12-11 14:03:22.862556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:29.966 [2024-12-11 14:03:22.862569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:29.966 [2024-12-11 14:03:22.862581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.966 [2024-12-11 14:03:22.862591] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:29.966 [2024-12-11 14:03:22.862604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:29.966 [2024-12-11 14:03:22.862615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:29.966 [2024-12-11 14:03:22.862629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.966 [2024-12-11 14:03:22.862640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:29.966 [2024-12-11 14:03:22.862656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:29.966 [2024-12-11 14:03:22.862666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:29.966 [2024-12-11 14:03:22.862678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:29.966 [2024-12-11 14:03:22.862688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:29.966 [2024-12-11 14:03:22.862701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:29.966 [2024-12-11 14:03:22.862713] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:29.966 [2024-12-11 
14:03:22.862728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.966 [2024-12-11 14:03:22.862744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:29.966 [2024-12-11 14:03:22.862758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:29.966 [2024-12-11 14:03:22.862769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:29.966 [2024-12-11 14:03:22.862783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:29.966 [2024-12-11 14:03:22.862794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:29.966 [2024-12-11 14:03:22.862807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:29.966 [2024-12-11 14:03:22.862818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:29.966 [2024-12-11 14:03:22.862843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:29.966 [2024-12-11 14:03:22.862854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:29.966 [2024-12-11 14:03:22.862871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:29.966 [2024-12-11 14:03:22.862883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:29.966 [2024-12-11 14:03:22.862896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:29.966 [2024-12-11 14:03:22.862907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:29.966 [2024-12-11 14:03:22.862921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:29.966 [2024-12-11 14:03:22.862937] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:29.966 [2024-12-11 14:03:22.862959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.966 [2024-12-11 14:03:22.862976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:29.966 [2024-12-11 14:03:22.862998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:29.966 [2024-12-11 14:03:22.863014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:29.966 [2024-12-11 14:03:22.863051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:29.966 [2024-12-11 14:03:22.863063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.966 [2024-12-11 14:03:22.863077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:29.966 [2024-12-11 14:03:22.863088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:22:29.966 [2024-12-11 14:03:22.863103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.966 [2024-12-11 14:03:22.863150] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:29.966 [2024-12-11 14:03:22.863169] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:34.159 [2024-12-11 14:03:26.334483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.159 [2024-12-11 14:03:26.334540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:34.159 [2024-12-11 14:03:26.334559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3476.968 ms 00:22:34.159 [2024-12-11 14:03:26.334573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.159 [2024-12-11 14:03:26.372174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.159 [2024-12-11 14:03:26.372223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:34.159 [2024-12-11 14:03:26.372241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.322 ms 00:22:34.159 [2024-12-11 14:03:26.372254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.159 [2024-12-11 14:03:26.372386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.159 [2024-12-11 14:03:26.372401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:34.159 [2024-12-11 14:03:26.372413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:22:34.159 [2024-12-11 14:03:26.372433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.159 [2024-12-11 14:03:26.416062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.159 [2024-12-11 14:03:26.416109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:34.159 [2024-12-11 14:03:26.416123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.641 ms 00:22:34.159 [2024-12-11 14:03:26.416152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.159 [2024-12-11 14:03:26.416190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.159 [2024-12-11 14:03:26.416208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:34.159 [2024-12-11 14:03:26.416219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:34.159 [2024-12-11 14:03:26.416243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.159 [2024-12-11 14:03:26.416703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.159 [2024-12-11 14:03:26.416720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:34.159 [2024-12-11 14:03:26.416731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:22:34.159 [2024-12-11 14:03:26.416743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.159 
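A quick cross-check that the superblock metadata layout above agrees with the human-readable dump: taking region type 0x2 (blk_sz 0x5000) to be the L2P, its size in 4 KiB FTL blocks is the same 80 MiB reported for "Region l2p":

    echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # -> 80 (MiB), matches 'Region l2p ... blocks: 80.00 MiB'
    echo $(( 20971520 * 4 / 1024 / 1024 ))    # -> 80 (MiB): 20971520 L2P entries * 4-byte addresses

(That type 0x2 is the L2P region is inferred from the matching sizes, not stated in the log.)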
[2024-12-11 14:03:26.416837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.160 [2024-12-11 14:03:26.416871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:34.160 [2024-12-11 14:03:26.416900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:22:34.160 [2024-12-11 14:03:26.416916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.160 [2024-12-11 14:03:26.436724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.160 [2024-12-11 14:03:26.436770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:34.160 [2024-12-11 14:03:26.436784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.819 ms 00:22:34.160 [2024-12-11 14:03:26.436813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.160 [2024-12-11 14:03:26.471322] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:34.160 [2024-12-11 14:03:26.474895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.160 [2024-12-11 14:03:26.474934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:34.160 [2024-12-11 14:03:26.474955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.020 ms 00:22:34.160 [2024-12-11 14:03:26.474968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.160 [2024-12-11 14:03:26.559784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.160 [2024-12-11 14:03:26.559852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:34.160 [2024-12-11 14:03:26.559873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.905 ms 00:22:34.160 [2024-12-11 14:03:26.559883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.160 [2024-12-11 14:03:26.560084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.160 [2024-12-11 14:03:26.560101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:34.160 [2024-12-11 14:03:26.560117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:22:34.160 [2024-12-11 14:03:26.560128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.160 [2024-12-11 14:03:26.594472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.160 [2024-12-11 14:03:26.594510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:34.160 [2024-12-11 14:03:26.594526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.344 ms 00:22:34.160 [2024-12-11 14:03:26.594536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.160 [2024-12-11 14:03:26.628992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.160 [2024-12-11 14:03:26.629027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:34.160 [2024-12-11 14:03:26.629055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.464 ms 00:22:34.160 [2024-12-11 14:03:26.629064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.160 [2024-12-11 14:03:26.629728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.160 [2024-12-11 14:03:26.629747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:34.160 
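The earlier ftl_l2p_cache.c notice ('l2p maximum resident size is: 9 (of 10) MiB') is --l2p_dram_limit 10 at work: the full mapping table would need 80 MiB of DRAM, so only a page cache fits under the limit:

    echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80 MiB of L2P vs. a 10 MiB DRAM budget

The 9-of-10 split presumably leaves the remaining 1 MiB for cache bookkeeping; that is a reading of the message, not of ftl_l2p_cache.c.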
[2024-12-11 14:03:26.629761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:22:34.160 [2024-12-11 14:03:26.629773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.160 [2024-12-11 14:03:26.724958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.160 [2024-12-11 14:03:26.724997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:34.160 [2024-12-11 14:03:26.725019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.284 ms 00:22:34.160 [2024-12-11 14:03:26.725030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.160 [2024-12-11 14:03:26.763414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.160 [2024-12-11 14:03:26.763453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:34.160 [2024-12-11 14:03:26.763480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.361 ms 00:22:34.160 [2024-12-11 14:03:26.763506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.160 [2024-12-11 14:03:26.797971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.160 [2024-12-11 14:03:26.798175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:34.160 [2024-12-11 14:03:26.798218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.475 ms 00:22:34.160 [2024-12-11 14:03:26.798228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.160 [2024-12-11 14:03:26.833303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.160 [2024-12-11 14:03:26.833470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:34.160 [2024-12-11 14:03:26.833497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.062 ms 00:22:34.160 [2024-12-11 14:03:26.833508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.160 [2024-12-11 14:03:26.833555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.160 [2024-12-11 14:03:26.833567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:34.160 [2024-12-11 14:03:26.833584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:34.160 [2024-12-11 14:03:26.833594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.160 [2024-12-11 14:03:26.833698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.160 [2024-12-11 14:03:26.833715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:34.160 [2024-12-11 14:03:26.833729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:34.160 [2024-12-11 14:03:26.833739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.160 [2024-12-11 14:03:26.834788] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3998.482 ms, result 0 00:22:34.160 { 00:22:34.160 "name": "ftl0", 00:22:34.160 "uuid": "d0422185-0cc6-471c-ae4b-64140d5ed839" 00:22:34.160 } 00:22:34.160 14:03:26 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:34.160 14:03:26 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:34.160 14:03:27 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:22:34.160 14:03:27 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:34.420 [2024-12-11 14:03:27.253414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.420 [2024-12-11 14:03:27.253473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:34.420 [2024-12-11 14:03:27.253490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:34.420 [2024-12-11 14:03:27.253502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.420 [2024-12-11 14:03:27.253528] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:34.420 [2024-12-11 14:03:27.257708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.420 [2024-12-11 14:03:27.257872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:34.420 [2024-12-11 14:03:27.257933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.162 ms 00:22:34.420 [2024-12-11 14:03:27.257945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.420 [2024-12-11 14:03:27.258209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.420 [2024-12-11 14:03:27.258226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:34.420 [2024-12-11 14:03:27.258240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:22:34.420 [2024-12-11 14:03:27.258251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.420 [2024-12-11 14:03:27.260785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.420 [2024-12-11 14:03:27.260810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:34.420 [2024-12-11 14:03:27.260843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.518 ms 00:22:34.420 [2024-12-11 14:03:27.260854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.420 [2024-12-11 14:03:27.265712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.420 [2024-12-11 14:03:27.265747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:34.420 [2024-12-11 14:03:27.265765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.838 ms 00:22:34.420 [2024-12-11 14:03:27.265790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.420 [2024-12-11 14:03:27.300085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.420 [2024-12-11 14:03:27.300131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:34.420 [2024-12-11 14:03:27.300149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.254 ms 00:22:34.420 [2024-12-11 14:03:27.300158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.420 [2024-12-11 14:03:27.321189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.420 [2024-12-11 14:03:27.321352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:34.420 [2024-12-11 14:03:27.321379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.018 ms 00:22:34.420 [2024-12-11 14:03:27.321390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.420 [2024-12-11 14:03:27.321537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.420 [2024-12-11 14:03:27.321551] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:34.420 [2024-12-11 14:03:27.321565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:22:34.420 [2024-12-11 14:03:27.321574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.420 [2024-12-11 14:03:27.356192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.420 [2024-12-11 14:03:27.356227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:34.420 [2024-12-11 14:03:27.356242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.648 ms 00:22:34.420 [2024-12-11 14:03:27.356252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.421 [2024-12-11 14:03:27.390401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.421 [2024-12-11 14:03:27.390437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:34.421 [2024-12-11 14:03:27.390452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.163 ms 00:22:34.421 [2024-12-11 14:03:27.390477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.421 [2024-12-11 14:03:27.424355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.421 [2024-12-11 14:03:27.424392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:34.421 [2024-12-11 14:03:27.424407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.885 ms 00:22:34.421 [2024-12-11 14:03:27.424417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.421 [2024-12-11 14:03:27.458451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.421 [2024-12-11 14:03:27.458488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:34.421 [2024-12-11 14:03:27.458504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.996 ms 00:22:34.421 [2024-12-11 14:03:27.458514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.421 [2024-12-11 14:03:27.458554] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:34.421 [2024-12-11 14:03:27.458570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458684] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.458998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 
[2024-12-11 14:03:27.459032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:34.421 [2024-12-11 14:03:27.459398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free
00:22:34.421 [2024-12-11 14:03:27.459411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:22:34.421 [2024-12-11 14:03:27.459640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:22:34.422 [2024-12-11 14:03:27.459963] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:34.422 [2024-12-11 14:03:27.459976] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d0422185-0cc6-471c-ae4b-64140d5ed839
00:22:34.422 [2024-12-11 14:03:27.459987] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:22:34.422 [2024-12-11 14:03:27.460003] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:22:34.422 [2024-12-11 14:03:27.460016] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:22:34.422 [2024-12-11 14:03:27.460029] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:22:34.422 [2024-12-11 14:03:27.460038] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:34.422 [2024-12-11 14:03:27.460051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:22:34.422 [2024-12-11 14:03:27.460061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:22:34.422 [2024-12-11 14:03:27.460076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:22:34.422 [2024-12-11 14:03:27.460092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:22:34.422 [2024-12-11 14:03:27.460113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:34.422 [2024-12-11 14:03:27.460130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:22:34.422 [2024-12-11 14:03:27.460148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.560 ms
00:22:34.422 [2024-12-11 14:03:27.460162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:34.681 [2024-12-11 14:03:27.479915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:34.681 [2024-12-11 14:03:27.479950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:22:34.681 [2024-12-11 14:03:27.479966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.726 ms
00:22:34.681 [2024-12-11 14:03:27.479992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:34.681 [2024-12-11 14:03:27.480496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:34.681 [2024-12-11 14:03:27.480508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:22:34.681 [2024-12-11 14:03:27.480524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms
00:22:34.681 [2024-12-11 14:03:27.480534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:34.681 [2024-12-11 14:03:27.545093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:34.681 [2024-12-11 14:03:27.545132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:22:34.681 [2024-12-11 14:03:27.545164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:34.681 [2024-12-11 14:03:27.545176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:34.681 [2024-12-11 14:03:27.545235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:34.681 [2024-12-11 14:03:27.545246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:22:34.681 [2024-12-11 14:03:27.545262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:34.681 [2024-12-11 14:03:27.545273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:34.681 [2024-12-11 14:03:27.545371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:34.681 [2024-12-11 14:03:27.545393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:22:34.681 [2024-12-11 14:03:27.545412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:34.681 [2024-12-11 14:03:27.545430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:34.681 [2024-12-11 14:03:27.545458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:34.681 [2024-12-11 14:03:27.545469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:22:34.681 [2024-12-11 14:03:27.545482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:34.681 [2024-12-11 14:03:27.545495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:34.681 [2024-12-11 14:03:27.666771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:34.681 [2024-12-11 14:03:27.667000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:22:34.681 [2024-12-11 14:03:27.667034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:34.681 [2024-12-11 14:03:27.667045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:34.941 [2024-12-11 14:03:27.760118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:34.941 [2024-12-11 14:03:27.760166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:22:34.941 [2024-12-11 14:03:27.760183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:34.941 [2024-12-11 14:03:27.760198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:34.941 [2024-12-11 14:03:27.760311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:34.941 [2024-12-11 14:03:27.760323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:22:34.941 [2024-12-11 14:03:27.760336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:34.941 [2024-12-11 14:03:27.760346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:34.941 [2024-12-11 14:03:27.760419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:34.941 [2024-12-11 14:03:27.760431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:22:34.941 [2024-12-11 14:03:27.760443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:34.941 [2024-12-11 14:03:27.760454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:34.941 [2024-12-11 14:03:27.760573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:34.941 [2024-12-11 14:03:27.760586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:22:34.941 [2024-12-11 14:03:27.760600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:34.941 [2024-12-11 14:03:27.760610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:34.941 [2024-12-11 14:03:27.760652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:34.941 [2024-12-11 14:03:27.760665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:22:34.941 [2024-12-11 14:03:27.760677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:34.941 [2024-12-11 14:03:27.760687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:34.941 [2024-12-11 14:03:27.760732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:34.941 [2024-12-11 14:03:27.760743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:22:34.941 [2024-12-11 14:03:27.760756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:34.941 [2024-12-11 14:03:27.760766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:34.941 [2024-12-11 14:03:27.760815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:34.941 [2024-12-11 14:03:27.760828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:34.941 [2024-12-11 14:03:27.760858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:34.941 [2024-12-11 14:03:27.760869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:34.941 [2024-12-11 14:03:27.761022] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 508.374 ms, result 0
00:22:34.941 true
00:22:34.941 14:03:27 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 80073
00:22:34.941 14:03:27 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 80073 ']'
00:22:34.941 14:03:27 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 80073
00:22:34.941 14:03:27 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname
00:22:34.941 14:03:27 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:34.941 14:03:27 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80073
00:22:34.941 killing process with pid 80073
00:22:34.941 14:03:27 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:34.941 14:03:27 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:34.941 14:03:27 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80073'
00:22:34.941 14:03:27 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 80073
00:22:34.941 14:03:27 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 80073
00:22:40.240 14:03:32 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:22:44.432 262144+0 records in
00:22:44.432 262144+0 records out
00:22:44.432 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.96952 s, 270 MB/s
00:22:44.432 14:03:36 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:22:45.369 14:03:38 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:45.369 [2024-12-11 14:03:38.320245] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization...
00:22:45.369 [2024-12-11 14:03:38.320551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80309 ]
00:22:45.628 [2024-12-11 14:03:38.509920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:45.628 [2024-12-11 14:03:38.621648] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:22:46.198 [2024-12-11 14:03:38.978422] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:22:46.198 [2024-12-11 14:03:38.978747] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:22:46.198 [2024-12-11 14:03:39.144609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.198 [2024-12-11 14:03:39.144663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:22:46.198 [2024-12-11 14:03:39.144677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:22:46.198 [2024-12-11 14:03:39.144687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.198 [2024-12-11 14:03:39.144732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.198 [2024-12-11 14:03:39.144750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:46.198 [2024-12-11 14:03:39.144760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:22:46.198 [2024-12-11 14:03:39.144768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.198 [2024-12-11 14:03:39.144789] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:22:46.198 [2024-12-11 14:03:39.145821] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:22:46.198 [2024-12-11 14:03:39.145853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.198 [2024-12-11 14:03:39.145864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:22:46.198 [2024-12-11 14:03:39.145875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms
00:22:46.198 [2024-12-11 14:03:39.145885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.198 [2024-12-11 14:03:39.147381] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:22:46.198 [2024-12-11 14:03:39.165432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.198 [2024-12-11 14:03:39.165471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:22:46.198 [2024-12-11 14:03:39.165485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.081 ms
00:22:46.198 [2024-12-11 14:03:39.165511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.198 [2024-12-11 14:03:39.165591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.198 [2024-12-11 14:03:39.165604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:22:46.198 [2024-12-11 14:03:39.165614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:22:46.198 [2024-12-11 14:03:39.165624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.198 [2024-12-11 14:03:39.172244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.198 [2024-12-11 14:03:39.172432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:22:46.198 [2024-12-11 14:03:39.172453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.559 ms
00:22:46.198 [2024-12-11 14:03:39.172475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.198 [2024-12-11 14:03:39.172557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.198 [2024-12-11 14:03:39.172570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:22:46.198 [2024-12-11 14:03:39.172580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms
00:22:46.198 [2024-12-11 14:03:39.172590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.198 [2024-12-11 14:03:39.172631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.198 [2024-12-11 14:03:39.172643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:22:46.198 [2024-12-11 14:03:39.172653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:22:46.198 [2024-12-11 14:03:39.172662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.198 [2024-12-11 14:03:39.172689] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:22:46.198 [2024-12-11 14:03:39.177447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.198 [2024-12-11 14:03:39.177479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:22:46.198 [2024-12-11 14:03:39.177494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.771 ms
00:22:46.198 [2024-12-11 14:03:39.177520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.198 [2024-12-11 14:03:39.177552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.198 [2024-12-11 14:03:39.177564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:22:46.198 [2024-12-11 14:03:39.177574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:22:46.198 [2024-12-11 14:03:39.177583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.198 [2024-12-11 14:03:39.177633] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:22:46.198 [2024-12-11 14:03:39.177657] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:22:46.198 [2024-12-11 14:03:39.177690] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:22:46.198 [2024-12-11 14:03:39.177710] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:22:46.198 [2024-12-11 14:03:39.177796] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:22:46.198 [2024-12-11 14:03:39.177809] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:22:46.198 [2024-12-11 14:03:39.177821] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:22:46.198 [2024-12-11 14:03:39.177833] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:22:46.198 [2024-12-11 14:03:39.177863] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:22:46.198 [2024-12-11 14:03:39.177891] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:22:46.198 [2024-12-11 14:03:39.177901] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:22:46.198 [2024-12-11 14:03:39.177911] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:22:46.198 [2024-12-11 14:03:39.177924] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:22:46.198 [2024-12-11 14:03:39.177934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.198 [2024-12-11 14:03:39.177945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:22:46.198 [2024-12-11 14:03:39.177956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms
00:22:46.198 [2024-12-11 14:03:39.177966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.199 [2024-12-11 14:03:39.178036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.199 [2024-12-11 14:03:39.178047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:22:46.199 [2024-12-11 14:03:39.178058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:22:46.199 [2024-12-11 14:03:39.178067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.199 [2024-12-11 14:03:39.178163] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:22:46.199 [2024-12-11 14:03:39.178177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:22:46.199 [2024-12-11 14:03:39.178187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:22:46.199 [2024-12-11 14:03:39.178197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:46.199 [2024-12-11 14:03:39.178208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:22:46.199 [2024-12-11 14:03:39.178217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:22:46.199 [2024-12-11 14:03:39.178226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:22:46.199 [2024-12-11 14:03:39.178237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:22:46.199 [2024-12-11 14:03:39.178246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:22:46.199 [2024-12-11 14:03:39.178256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:22:46.199 [2024-12-11 14:03:39.178266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:22:46.199 [2024-12-11 14:03:39.178276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:22:46.199 [2024-12-11 14:03:39.178285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:22:46.199 [2024-12-11 14:03:39.178305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:22:46.199 [2024-12-11 14:03:39.178315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:22:46.199 [2024-12-11 14:03:39.178324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:46.199 [2024-12-11 14:03:39.178333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:22:46.199 [2024-12-11 14:03:39.178343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:22:46.199 [2024-12-11 14:03:39.178352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:46.199 [2024-12-11 14:03:39.178362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:22:46.199 [2024-12-11 14:03:39.178371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:22:46.199 [2024-12-11 14:03:39.178381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:22:46.199 [2024-12-11 14:03:39.178390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:22:46.199 [2024-12-11 14:03:39.178399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:22:46.199 [2024-12-11 14:03:39.178408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:22:46.199 [2024-12-11 14:03:39.178417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:22:46.199 [2024-12-11 14:03:39.178426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:22:46.199 [2024-12-11 14:03:39.178435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:22:46.199 [2024-12-11 14:03:39.178445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:22:46.199 [2024-12-11 14:03:39.178454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:22:46.199 [2024-12-11 14:03:39.178463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:22:46.199 [2024-12-11 14:03:39.178472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:22:46.199 [2024-12-11 14:03:39.178481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:22:46.199 [2024-12-11 14:03:39.178490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:22:46.199 [2024-12-11 14:03:39.178500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:22:46.199 [2024-12-11 14:03:39.178509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:22:46.199 [2024-12-11 14:03:39.178517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:22:46.199 [2024-12-11 14:03:39.178527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:22:46.199 [2024-12-11 14:03:39.178535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:22:46.199 [2024-12-11 14:03:39.178544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:46.199 [2024-12-11 14:03:39.178554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:22:46.199 [2024-12-11 14:03:39.178563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:22:46.199 [2024-12-11 14:03:39.178574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:46.199 [2024-12-11 14:03:39.178583] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:22:46.199 [2024-12-11 14:03:39.178593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:22:46.199 [2024-12-11 14:03:39.178603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:22:46.199 [2024-12-11 14:03:39.178612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:46.199 [2024-12-11 14:03:39.178622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:22:46.199 [2024-12-11 14:03:39.178632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:22:46.199 [2024-12-11 14:03:39.178641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:22:46.199 [2024-12-11 14:03:39.178650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:22:46.199 [2024-12-11 14:03:39.178659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:22:46.199 [2024-12-11 14:03:39.178669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:22:46.199 [2024-12-11 14:03:39.178680] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:22:46.199 [2024-12-11 14:03:39.178692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:22:46.199 [2024-12-11 14:03:39.178707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:22:46.199 [2024-12-11 14:03:39.178718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:22:46.199 [2024-12-11 14:03:39.178728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:22:46.199 [2024-12-11 14:03:39.178748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:22:46.199 [2024-12-11 14:03:39.178759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:22:46.199 [2024-12-11 14:03:39.178769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:22:46.199 [2024-12-11 14:03:39.178779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:22:46.199 [2024-12-11 14:03:39.178789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:22:46.199 [2024-12-11 14:03:39.178799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:22:46.199 [2024-12-11 14:03:39.178809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:22:46.199 [2024-12-11 14:03:39.178819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:22:46.199 [2024-12-11 14:03:39.178829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:22:46.199 [2024-12-11 14:03:39.178850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:22:46.199 [2024-12-11 14:03:39.178860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:22:46.199 [2024-12-11 14:03:39.178870] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:22:46.199 [2024-12-11 14:03:39.178881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:22:46.199 [2024-12-11 14:03:39.178893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:22:46.199 [2024-12-11 14:03:39.178903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:22:46.199 [2024-12-11 14:03:39.178913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:22:46.199 [2024-12-11 14:03:39.178924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:22:46.199 [2024-12-11 14:03:39.178936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.199 [2024-12-11 14:03:39.178946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:22:46.199 [2024-12-11 14:03:39.178955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms
00:22:46.199 [2024-12-11 14:03:39.178965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.199 [2024-12-11 14:03:39.216779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.199 [2024-12-11 14:03:39.216994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:22:46.199 [2024-12-11 14:03:39.217129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.832 ms
00:22:46.199 [2024-12-11 14:03:39.217179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.199 [2024-12-11 14:03:39.217277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.199 [2024-12-11 14:03:39.217309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:22:46.199 [2024-12-11 14:03:39.217415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms
00:22:46.199 [2024-12-11 14:03:39.217454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.459 [2024-12-11 14:03:39.291485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.459 [2024-12-11 14:03:39.291675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:22:46.459 [2024-12-11 14:03:39.291787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.063 ms
00:22:46.459 [2024-12-11 14:03:39.291839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.459 [2024-12-11 14:03:39.291918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.459 [2024-12-11 14:03:39.291953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:22:46.459 [2024-12-11 14:03:39.292237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:22:46.459 [2024-12-11 14:03:39.292280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.459 [2024-12-11 14:03:39.292837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.459 [2024-12-11 14:03:39.292970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:22:46.459 [2024-12-11 14:03:39.293076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms
00:22:46.459 [2024-12-11 14:03:39.293117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.459 [2024-12-11 14:03:39.293288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.459 [2024-12-11 14:03:39.293333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:22:46.459 [2024-12-11 14:03:39.293431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms
00:22:46.459 [2024-12-11 14:03:39.293471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.459 [2024-12-11 14:03:39.313979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.459 [2024-12-11 14:03:39.314140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:22:46.459 [2024-12-11 14:03:39.314260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.490 ms
00:22:46.459 [2024-12-11 14:03:39.314304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.459 [2024-12-11 14:03:39.333871] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:22:46.459 [2024-12-11 14:03:39.334059] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:22:46.459 [2024-12-11 14:03:39.334200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.459 [2024-12-11 14:03:39.334237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:22:46.459 [2024-12-11 14:03:39.334330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.782 ms
00:22:46.459 [2024-12-11 14:03:39.334369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.459 [2024-12-11 14:03:39.362208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.459 [2024-12-11 14:03:39.362362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:22:46.459 [2024-12-11 14:03:39.362498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.812 ms
00:22:46.459 [2024-12-11 14:03:39.362517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.459 [2024-12-11 14:03:39.379648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.459 [2024-12-11 14:03:39.379687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:22:46.459 [2024-12-11 14:03:39.379700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.054 ms
00:22:46.459 [2024-12-11 14:03:39.379726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.459 [2024-12-11 14:03:39.396964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.459 [2024-12-11 14:03:39.397013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:22:46.459 [2024-12-11 14:03:39.397025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.223 ms
00:22:46.459 [2024-12-11 14:03:39.397050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.459 [2024-12-11 14:03:39.397768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.459 [2024-12-11 14:03:39.397790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:22:46.459 [2024-12-11 14:03:39.397801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms
00:22:46.459 [2024-12-11 14:03:39.397817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.459 [2024-12-11 14:03:39.478213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.459 [2024-12-11 14:03:39.478270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:22:46.459 [2024-12-11 14:03:39.478286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.456 ms
00:22:46.459 [2024-12-11 14:03:39.478318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.459 [2024-12-11 14:03:39.488491] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:22:46.459 [2024-12-11 14:03:39.490895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.459 [2024-12-11 14:03:39.490926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:22:46.459 [2024-12-11 14:03:39.490939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.550 ms
00:22:46.459 [2024-12-11 14:03:39.490949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.459 [2024-12-11 14:03:39.491025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.459 [2024-12-11 14:03:39.491038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:22:46.459 [2024-12-11 14:03:39.491049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:22:46.459 [2024-12-11 14:03:39.491058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.459 [2024-12-11 14:03:39.491129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.459 [2024-12-11 14:03:39.491141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:22:46.459 [2024-12-11 14:03:39.491162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:22:46.459 [2024-12-11 14:03:39.491172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.459 [2024-12-11 14:03:39.491192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.459 [2024-12-11 14:03:39.491203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:22:46.460 [2024-12-11 14:03:39.491212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:22:46.460 [2024-12-11 14:03:39.491221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.460 [2024-12-11 14:03:39.491255] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:22:46.460 [2024-12-11 14:03:39.491270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.460 [2024-12-11 14:03:39.491279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:22:46.460 [2024-12-11 14:03:39.491289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:22:46.460 [2024-12-11 14:03:39.491298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.718 [2024-12-11 14:03:39.525930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.719 [2024-12-11 14:03:39.526097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:22:46.719 [2024-12-11 14:03:39.526244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.670 ms
00:22:46.719 [2024-12-11 14:03:39.526294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.719 [2024-12-11 14:03:39.526384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:46.719 [2024-12-11 14:03:39.526421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:22:46.719 [2024-12-11 14:03:39.526516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:22:46.719 [2024-12-11 14:03:39.526555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:46.719 [2024-12-11 14:03:39.527646] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.238 ms, result 0
00:22:47.663  [2024-12-11T14:03:41.654Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-11T14:03:42.591Z] Copying: 47/1024 [MB] (23 MBps) [2024-12-11T14:03:43.969Z] Copying: 70/1024 [MB] (23 MBps) [2024-12-11T14:03:44.537Z] Copying: 94/1024 [MB] (23 MBps) [2024-12-11T14:03:45.914Z] Copying: 118/1024 [MB] (24 MBps) [2024-12-11T14:03:46.854Z] Copying: 142/1024 [MB] (23 MBps) [2024-12-11T14:03:47.791Z] Copying: 166/1024 [MB] (24 MBps) [2024-12-11T14:03:48.735Z] Copying: 190/1024 [MB] (23 MBps) [2024-12-11T14:03:49.672Z] Copying: 214/1024 [MB] (23 MBps) [2024-12-11T14:03:50.609Z] Copying: 237/1024 [MB] (23 MBps) [2024-12-11T14:03:51.546Z] Copying: 261/1024 [MB] (24 MBps) [2024-12-11T14:03:52.922Z] Copying: 286/1024 [MB] (24 MBps) [2024-12-11T14:03:53.859Z] Copying: 310/1024 [MB] (24 MBps) [2024-12-11T14:03:54.794Z] Copying: 334/1024 [MB] (24 MBps) [2024-12-11T14:03:55.730Z] Copying: 359/1024 [MB] (24 MBps) [2024-12-11T14:03:56.666Z] Copying: 384/1024 [MB] (25 MBps) [2024-12-11T14:03:57.603Z] Copying: 410/1024 [MB] (25 MBps) [2024-12-11T14:03:58.581Z] Copying: 436/1024 [MB] (25 MBps) [2024-12-11T14:03:59.518Z] Copying: 461/1024 [MB] (25 MBps) [2024-12-11T14:04:00.897Z] Copying: 485/1024 [MB] (24 MBps) [2024-12-11T14:04:01.832Z] Copying: 510/1024 [MB] (24 MBps) [2024-12-11T14:04:02.770Z] Copying: 534/1024 [MB] (24 MBps) [2024-12-11T14:04:03.708Z] Copying: 558/1024 [MB] (24 MBps) [2024-12-11T14:04:04.643Z] Copying: 583/1024 [MB] (24 MBps) [2024-12-11T14:04:05.591Z] Copying: 607/1024 [MB] (24 MBps) [2024-12-11T14:04:06.527Z] Copying: 631/1024 [MB] (24 MBps) [2024-12-11T14:04:07.521Z] Copying: 655/1024 [MB] (24 MBps) [2024-12-11T14:04:08.898Z] Copying: 681/1024 [MB] (25 MBps) [2024-12-11T14:04:09.834Z] Copying: 706/1024 [MB] (25 MBps) [2024-12-11T14:04:10.770Z] Copying: 731/1024 [MB] (25 MBps) [2024-12-11T14:04:11.706Z] Copying: 757/1024 [MB] (25 MBps) [2024-12-11T14:04:12.642Z] Copying: 782/1024 [MB] (25 MBps) [2024-12-11T14:04:13.578Z] Copying: 807/1024 [MB] (25 MBps) [2024-12-11T14:04:14.518Z] Copying: 831/1024 [MB] (23 MBps) [2024-12-11T14:04:15.901Z] Copying: 855/1024 [MB] (23 MBps) [2024-12-11T14:04:16.836Z] Copying: 879/1024 [MB] (23 MBps) [2024-12-11T14:04:17.770Z] Copying: 902/1024 [MB] (23 MBps) [2024-12-11T14:04:18.708Z] Copying: 926/1024 [MB] (23 MBps) [2024-12-11T14:04:19.645Z] Copying: 949/1024 [MB] (22 MBps) [2024-12-11T14:04:20.582Z] Copying: 973/1024 [MB] (23 MBps) [2024-12-11T14:04:21.538Z] Copying: 998/1024 [MB] (25 MBps) [2024-12-11T14:04:21.538Z] Copying: 1023/1024 [MB] (24 MBps) [2024-12-11T14:04:21.538Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-11 14:04:21.490971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.491 [2024-12-11 14:04:21.491178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:23:28.491 [2024-12-11 14:04:21.491304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:23:28.491 [2024-12-11 14:04:21.491420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.491 [2024-12-11 14:04:21.491464] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:28.491 [2024-12-11 14:04:21.495800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.491 [2024-12-11 14:04:21.495845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:23:28.491 [2024-12-11 14:04:21.495861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.323 ms
00:23:28.491 [2024-12-11 14:04:21.495879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.491 [2024-12-11 14:04:21.497732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.491 [2024-12-11 14:04:21.497897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:23:28.491 [2024-12-11 14:04:21.497919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.829 ms
00:23:28.491 [2024-12-11 14:04:21.497931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.491 [2024-12-11 14:04:21.515341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.491 [2024-12-11 14:04:21.515413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:23:28.491 [2024-12-11 14:04:21.515428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.413 ms
00:23:28.491 [2024-12-11 14:04:21.515439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.491 [2024-12-11 14:04:21.520528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.491 [2024-12-11 14:04:21.520567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:23:28.491 [2024-12-11 14:04:21.520580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.052 ms
00:23:28.491 [2024-12-11 14:04:21.520590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.751 [2024-12-11 14:04:21.557627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.751 [2024-12-11 14:04:21.557687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:23:28.751 [2024-12-11 14:04:21.557703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.041 ms
00:23:28.751 [2024-12-11 14:04:21.557713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.751 [2024-12-11 14:04:21.579093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.751 [2024-12-11 14:04:21.579152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:23:28.751 [2024-12-11 14:04:21.579168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.361 ms
00:23:28.751 [2024-12-11 14:04:21.579180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.751 [2024-12-11 14:04:21.579351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.751 [2024-12-11 14:04:21.579373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:23:28.751 [2024-12-11 14:04:21.579385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms
00:23:28.751 [2024-12-11 14:04:21.579395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.752 [2024-12-11 14:04:21.616230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.752 [2024-12-11 14:04:21.616295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:23:28.752 [2024-12-11 14:04:21.616310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.876 ms
00:23:28.752 [2024-12-11 14:04:21.616320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.752 [2024-12-11 14:04:21.652687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.752 [2024-12-11 14:04:21.652746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:23:28.752 [2024-12-11 14:04:21.652762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.380 ms
00:23:28.752 [2024-12-11 14:04:21.652772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.752 [2024-12-11 14:04:21.689102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.752 [2024-12-11 14:04:21.689165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:23:28.752 [2024-12-11 14:04:21.689182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.342 ms
00:23:28.752 [2024-12-11 14:04:21.689192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.752 [2024-12-11 14:04:21.725896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.752 [2024-12-11 14:04:21.725984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:23:28.752 [2024-12-11 14:04:21.726001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.661 ms
00:23:28.752 [2024-12-11 14:04:21.726011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:28.752 [2024-12-11 14:04:21.726064] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:28.752 [2024-12-11 14:04:21.726081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:23:28.752 [2024-12-11 14:04:21.726911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.726921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.726932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.726942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.726969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.726980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.726990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:23:28.753 [2024-12-11 14:04:21.727206] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:23:28.753 [2024-12-11 14:04:21.727221] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d0422185-0cc6-471c-ae4b-64140d5ed839
00:23:28.753 [2024-12-11 14:04:21.727232] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid
LBAs: 0 00:23:28.753 [2024-12-11 14:04:21.727242] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:28.753 [2024-12-11 14:04:21.727251] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:28.753 [2024-12-11 14:04:21.727262] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:28.753 [2024-12-11 14:04:21.727272] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:28.753 [2024-12-11 14:04:21.727292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:28.753 [2024-12-11 14:04:21.727302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:28.753 [2024-12-11 14:04:21.727311] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:28.753 [2024-12-11 14:04:21.727320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:28.753 [2024-12-11 14:04:21.727330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.753 [2024-12-11 14:04:21.727340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:28.753 [2024-12-11 14:04:21.727351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.270 ms 00:23:28.753 [2024-12-11 14:04:21.727361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.753 [2024-12-11 14:04:21.747137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.753 [2024-12-11 14:04:21.747189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:28.753 [2024-12-11 14:04:21.747204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.760 ms 00:23:28.753 [2024-12-11 14:04:21.747215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.753 [2024-12-11 14:04:21.747767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.753 [2024-12-11 14:04:21.747785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:28.753 [2024-12-11 14:04:21.747796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:23:28.753 [2024-12-11 14:04:21.747813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.012 [2024-12-11 14:04:21.800755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.012 [2024-12-11 14:04:21.800818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:29.012 [2024-12-11 14:04:21.800850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.012 [2024-12-11 14:04:21.800860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.012 [2024-12-11 14:04:21.800933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.012 [2024-12-11 14:04:21.800944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:29.012 [2024-12-11 14:04:21.800955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.012 [2024-12-11 14:04:21.800971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.012 [2024-12-11 14:04:21.801072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.012 [2024-12-11 14:04:21.801086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:29.012 [2024-12-11 14:04:21.801097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.012 [2024-12-11 14:04:21.801107] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.012 [2024-12-11 14:04:21.801124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.012 [2024-12-11 14:04:21.801135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:29.013 [2024-12-11 14:04:21.801145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.013 [2024-12-11 14:04:21.801155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.013 [2024-12-11 14:04:21.927570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.013 [2024-12-11 14:04:21.927630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:29.013 [2024-12-11 14:04:21.927646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.013 [2024-12-11 14:04:21.927657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.013 [2024-12-11 14:04:22.030702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.013 [2024-12-11 14:04:22.031034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:29.013 [2024-12-11 14:04:22.031062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.013 [2024-12-11 14:04:22.031083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.013 [2024-12-11 14:04:22.031180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.013 [2024-12-11 14:04:22.031193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:29.013 [2024-12-11 14:04:22.031203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.013 [2024-12-11 14:04:22.031213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.013 [2024-12-11 14:04:22.031263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.013 [2024-12-11 14:04:22.031275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:29.013 [2024-12-11 14:04:22.031286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.013 [2024-12-11 14:04:22.031296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.013 [2024-12-11 14:04:22.031419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.013 [2024-12-11 14:04:22.031433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:29.013 [2024-12-11 14:04:22.031443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.013 [2024-12-11 14:04:22.031454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.013 [2024-12-11 14:04:22.031521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.013 [2024-12-11 14:04:22.031536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:29.013 [2024-12-11 14:04:22.031547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.013 [2024-12-11 14:04:22.031557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.013 [2024-12-11 14:04:22.031596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.013 [2024-12-11 14:04:22.031611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:29.013 [2024-12-11 14:04:22.031621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:29.013 [2024-12-11 14:04:22.031631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.013 [2024-12-11 14:04:22.031680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.013 [2024-12-11 14:04:22.031694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:29.013 [2024-12-11 14:04:22.031704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.013 [2024-12-11 14:04:22.031714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.013 [2024-12-11 14:04:22.031871] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 541.725 ms, result 0 00:23:30.917 00:23:30.917 00:23:30.917 14:04:23 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:30.917 [2024-12-11 14:04:23.647005] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:23:30.917 [2024-12-11 14:04:23.647129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80766 ] 00:23:30.917 [2024-12-11 14:04:23.828408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.917 [2024-12-11 14:04:23.948669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.485 [2024-12-11 14:04:24.301544] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:31.485 [2024-12-11 14:04:24.301616] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:31.485 [2024-12-11 14:04:24.462536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.485 [2024-12-11 14:04:24.462604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:31.485 [2024-12-11 14:04:24.462620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:31.485 [2024-12-11 14:04:24.462631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.485 [2024-12-11 14:04:24.462687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.485 [2024-12-11 14:04:24.462702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:31.485 [2024-12-11 14:04:24.462713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:31.485 [2024-12-11 14:04:24.462724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.485 [2024-12-11 14:04:24.462745] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:31.485 [2024-12-11 14:04:24.463717] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:31.485 [2024-12-11 14:04:24.463751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.485 [2024-12-11 14:04:24.463762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:31.485 [2024-12-11 14:04:24.463775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:23:31.485 [2024-12-11 14:04:24.463785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:31.485 [2024-12-11 14:04:24.465298] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:31.485 [2024-12-11 14:04:24.484737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.485 [2024-12-11 14:04:24.484785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:31.485 [2024-12-11 14:04:24.484802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.470 ms 00:23:31.485 [2024-12-11 14:04:24.484814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.485 [2024-12-11 14:04:24.484916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.485 [2024-12-11 14:04:24.484929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:31.485 [2024-12-11 14:04:24.484940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:31.485 [2024-12-11 14:04:24.484951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.485 [2024-12-11 14:04:24.491982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.485 [2024-12-11 14:04:24.492021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:31.485 [2024-12-11 14:04:24.492034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.963 ms 00:23:31.485 [2024-12-11 14:04:24.492050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.485 [2024-12-11 14:04:24.492134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.485 [2024-12-11 14:04:24.492148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:31.485 [2024-12-11 14:04:24.492160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:31.485 [2024-12-11 14:04:24.492170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.485 [2024-12-11 14:04:24.492218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.485 [2024-12-11 14:04:24.492230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:31.485 [2024-12-11 14:04:24.492241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:31.485 [2024-12-11 14:04:24.492250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.485 [2024-12-11 14:04:24.492280] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:31.485 [2024-12-11 14:04:24.496967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.485 [2024-12-11 14:04:24.497005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:31.485 [2024-12-11 14:04:24.497021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.700 ms 00:23:31.485 [2024-12-11 14:04:24.497031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.485 [2024-12-11 14:04:24.497068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.485 [2024-12-11 14:04:24.497079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:31.485 [2024-12-11 14:04:24.497090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:31.485 [2024-12-11 14:04:24.497100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.485 [2024-12-11 14:04:24.497158] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
FTL layout setup mode 0 00:23:31.485 [2024-12-11 14:04:24.497183] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:31.485 [2024-12-11 14:04:24.497219] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:31.485 [2024-12-11 14:04:24.497241] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:31.485 [2024-12-11 14:04:24.497331] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:31.485 [2024-12-11 14:04:24.497344] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:31.485 [2024-12-11 14:04:24.497357] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:31.485 [2024-12-11 14:04:24.497370] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:31.485 [2024-12-11 14:04:24.497382] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:31.485 [2024-12-11 14:04:24.497394] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:31.485 [2024-12-11 14:04:24.497404] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:31.485 [2024-12-11 14:04:24.497415] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:31.485 [2024-12-11 14:04:24.497429] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:31.485 [2024-12-11 14:04:24.497439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.485 [2024-12-11 14:04:24.497450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:31.485 [2024-12-11 14:04:24.497460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:23:31.485 [2024-12-11 14:04:24.497470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.485 [2024-12-11 14:04:24.497541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.485 [2024-12-11 14:04:24.497552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:31.485 [2024-12-11 14:04:24.497562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:31.485 [2024-12-11 14:04:24.497573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.485 [2024-12-11 14:04:24.497663] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:31.485 [2024-12-11 14:04:24.497676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:31.485 [2024-12-11 14:04:24.497687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:31.485 [2024-12-11 14:04:24.497697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.485 [2024-12-11 14:04:24.497707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:31.485 [2024-12-11 14:04:24.497716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:31.485 [2024-12-11 14:04:24.497725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:31.486 [2024-12-11 14:04:24.497736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:31.486 [2024-12-11 14:04:24.497746] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:31.486 [2024-12-11 14:04:24.497755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:31.486 [2024-12-11 14:04:24.497767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:31.486 [2024-12-11 14:04:24.497776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:31.486 [2024-12-11 14:04:24.497785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:31.486 [2024-12-11 14:04:24.497806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:31.486 [2024-12-11 14:04:24.497816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:31.486 [2024-12-11 14:04:24.498046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.486 [2024-12-11 14:04:24.498106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:31.486 [2024-12-11 14:04:24.498140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:31.486 [2024-12-11 14:04:24.498170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.486 [2024-12-11 14:04:24.498199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:31.486 [2024-12-11 14:04:24.498228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:31.486 [2024-12-11 14:04:24.498256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:31.486 [2024-12-11 14:04:24.498357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:31.486 [2024-12-11 14:04:24.498401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:31.486 [2024-12-11 14:04:24.498430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:31.486 [2024-12-11 14:04:24.498459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:31.486 [2024-12-11 14:04:24.498489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:31.486 [2024-12-11 14:04:24.498517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:31.486 [2024-12-11 14:04:24.498546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:31.486 [2024-12-11 14:04:24.498574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:31.486 [2024-12-11 14:04:24.498670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:31.486 [2024-12-11 14:04:24.498711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:31.486 [2024-12-11 14:04:24.498741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:31.486 [2024-12-11 14:04:24.498769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:31.486 [2024-12-11 14:04:24.498797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:31.486 [2024-12-11 14:04:24.498836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:31.486 [2024-12-11 14:04:24.498930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:31.486 [2024-12-11 14:04:24.498969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:31.486 [2024-12-11 14:04:24.498999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:31.486 [2024-12-11 14:04:24.499027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:23:31.486 [2024-12-11 14:04:24.499056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:31.486 [2024-12-11 14:04:24.499084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:31.486 [2024-12-11 14:04:24.499174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.486 [2024-12-11 14:04:24.499212] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:31.486 [2024-12-11 14:04:24.499225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:31.486 [2024-12-11 14:04:24.499235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:31.486 [2024-12-11 14:04:24.499246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:31.486 [2024-12-11 14:04:24.499256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:31.486 [2024-12-11 14:04:24.499266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:31.486 [2024-12-11 14:04:24.499275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:31.486 [2024-12-11 14:04:24.499285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:31.486 [2024-12-11 14:04:24.499294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:31.486 [2024-12-11 14:04:24.499303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:31.486 [2024-12-11 14:04:24.499315] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:31.486 [2024-12-11 14:04:24.499328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:31.486 [2024-12-11 14:04:24.499346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:31.486 [2024-12-11 14:04:24.499356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:31.486 [2024-12-11 14:04:24.499367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:31.486 [2024-12-11 14:04:24.499377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:31.486 [2024-12-11 14:04:24.499388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:31.486 [2024-12-11 14:04:24.499398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:31.486 [2024-12-11 14:04:24.499408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:31.486 [2024-12-11 14:04:24.499419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:31.486 [2024-12-11 14:04:24.499429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:31.486 [2024-12-11 14:04:24.499440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:31.486 [2024-12-11 
14:04:24.499450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:31.486 [2024-12-11 14:04:24.499461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:31.486 [2024-12-11 14:04:24.499478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:31.486 [2024-12-11 14:04:24.499494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:31.486 [2024-12-11 14:04:24.499505] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:31.486 [2024-12-11 14:04:24.499517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:31.486 [2024-12-11 14:04:24.499528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:31.486 [2024-12-11 14:04:24.499540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:31.486 [2024-12-11 14:04:24.499551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:31.486 [2024-12-11 14:04:24.499562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:31.486 [2024-12-11 14:04:24.499576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.486 [2024-12-11 14:04:24.499587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:31.486 [2024-12-11 14:04:24.499598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.970 ms 00:23:31.486 [2024-12-11 14:04:24.499608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.745 [2024-12-11 14:04:24.539783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.745 [2024-12-11 14:04:24.539853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:31.745 [2024-12-11 14:04:24.539870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.179 ms 00:23:31.746 [2024-12-11 14:04:24.539885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.746 [2024-12-11 14:04:24.539983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.746 [2024-12-11 14:04:24.539995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:31.746 [2024-12-11 14:04:24.540006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:31.746 [2024-12-11 14:04:24.540017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.746 [2024-12-11 14:04:24.592910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.746 [2024-12-11 14:04:24.592969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:31.746 [2024-12-11 14:04:24.592985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.896 ms 00:23:31.746 [2024-12-11 14:04:24.592996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.746 [2024-12-11 
14:04:24.593057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.746 [2024-12-11 14:04:24.593069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:31.746 [2024-12-11 14:04:24.593084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:31.746 [2024-12-11 14:04:24.593094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.746 [2024-12-11 14:04:24.593602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.746 [2024-12-11 14:04:24.593618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:31.746 [2024-12-11 14:04:24.593629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:23:31.746 [2024-12-11 14:04:24.593641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.746 [2024-12-11 14:04:24.593763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.746 [2024-12-11 14:04:24.593776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:31.746 [2024-12-11 14:04:24.593791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:23:31.746 [2024-12-11 14:04:24.593801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.746 [2024-12-11 14:04:24.614436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.746 [2024-12-11 14:04:24.614493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:31.746 [2024-12-11 14:04:24.614509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.645 ms 00:23:31.746 [2024-12-11 14:04:24.614519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.746 [2024-12-11 14:04:24.633877] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:31.746 [2024-12-11 14:04:24.634156] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:31.746 [2024-12-11 14:04:24.634189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.746 [2024-12-11 14:04:24.634207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:31.746 [2024-12-11 14:04:24.634227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.557 ms 00:23:31.746 [2024-12-11 14:04:24.634242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.746 [2024-12-11 14:04:24.664792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.746 [2024-12-11 14:04:24.664871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:31.746 [2024-12-11 14:04:24.664888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.536 ms 00:23:31.746 [2024-12-11 14:04:24.664899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.746 [2024-12-11 14:04:24.684021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.746 [2024-12-11 14:04:24.684265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:31.746 [2024-12-11 14:04:24.684291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.049 ms 00:23:31.746 [2024-12-11 14:04:24.684302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.746 [2024-12-11 14:04:24.702799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:31.746 [2024-12-11 14:04:24.702860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:31.746 [2024-12-11 14:04:24.702875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.457 ms 00:23:31.746 [2024-12-11 14:04:24.702886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.746 [2024-12-11 14:04:24.703702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.746 [2024-12-11 14:04:24.703729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:31.746 [2024-12-11 14:04:24.703745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:23:31.746 [2024-12-11 14:04:24.703755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.746 [2024-12-11 14:04:24.787815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.746 [2024-12-11 14:04:24.788124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:31.746 [2024-12-11 14:04:24.788162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.168 ms 00:23:31.746 [2024-12-11 14:04:24.788174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.005 [2024-12-11 14:04:24.800124] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:32.005 [2024-12-11 14:04:24.803375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.005 [2024-12-11 14:04:24.803544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:32.005 [2024-12-11 14:04:24.803569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.166 ms 00:23:32.005 [2024-12-11 14:04:24.803580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.005 [2024-12-11 14:04:24.803692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.005 [2024-12-11 14:04:24.803706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:32.005 [2024-12-11 14:04:24.803719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:32.005 [2024-12-11 14:04:24.803733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.005 [2024-12-11 14:04:24.803841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.005 [2024-12-11 14:04:24.803856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:32.005 [2024-12-11 14:04:24.803867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:32.005 [2024-12-11 14:04:24.803877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.005 [2024-12-11 14:04:24.803903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.005 [2024-12-11 14:04:24.803915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:32.005 [2024-12-11 14:04:24.803926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:32.005 [2024-12-11 14:04:24.803935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.005 [2024-12-11 14:04:24.803974] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:32.005 [2024-12-11 14:04:24.803987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.005 [2024-12-11 14:04:24.803997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Self test on startup 00:23:32.005 [2024-12-11 14:04:24.804007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:32.005 [2024-12-11 14:04:24.804017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.005 [2024-12-11 14:04:24.841836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.005 [2024-12-11 14:04:24.842113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:32.005 [2024-12-11 14:04:24.842147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.859 ms 00:23:32.005 [2024-12-11 14:04:24.842159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.005 [2024-12-11 14:04:24.842267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.005 [2024-12-11 14:04:24.842282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:32.005 [2024-12-11 14:04:24.842294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:32.005 [2024-12-11 14:04:24.842305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.005 [2024-12-11 14:04:24.843483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 381.116 ms, result 0 00:23:33.379  [2024-12-11T14:04:27.362Z] Copying: 26/1024 [MB] (26 MBps) [2024-12-11T14:04:28.306Z] Copying: 52/1024 [MB] (26 MBps) [2024-12-11T14:04:29.243Z] Copying: 77/1024 [MB] (25 MBps) [2024-12-11T14:04:30.179Z] Copying: 103/1024 [MB] (25 MBps) [2024-12-11T14:04:31.114Z] Copying: 128/1024 [MB] (25 MBps) [2024-12-11T14:04:32.490Z] Copying: 154/1024 [MB] (25 MBps) [2024-12-11T14:04:33.058Z] Copying: 179/1024 [MB] (25 MBps) [2024-12-11T14:04:34.435Z] Copying: 204/1024 [MB] (25 MBps) [2024-12-11T14:04:35.371Z] Copying: 230/1024 [MB] (25 MBps) [2024-12-11T14:04:36.307Z] Copying: 255/1024 [MB] (25 MBps) [2024-12-11T14:04:37.245Z] Copying: 281/1024 [MB] (25 MBps) [2024-12-11T14:04:38.182Z] Copying: 306/1024 [MB] (25 MBps) [2024-12-11T14:04:39.119Z] Copying: 332/1024 [MB] (25 MBps) [2024-12-11T14:04:40.055Z] Copying: 357/1024 [MB] (25 MBps) [2024-12-11T14:04:41.432Z] Copying: 382/1024 [MB] (25 MBps) [2024-12-11T14:04:42.369Z] Copying: 408/1024 [MB] (25 MBps) [2024-12-11T14:04:43.305Z] Copying: 433/1024 [MB] (25 MBps) [2024-12-11T14:04:44.252Z] Copying: 459/1024 [MB] (25 MBps) [2024-12-11T14:04:45.229Z] Copying: 485/1024 [MB] (25 MBps) [2024-12-11T14:04:46.171Z] Copying: 510/1024 [MB] (25 MBps) [2024-12-11T14:04:47.110Z] Copying: 536/1024 [MB] (25 MBps) [2024-12-11T14:04:48.047Z] Copying: 561/1024 [MB] (25 MBps) [2024-12-11T14:04:49.425Z] Copying: 587/1024 [MB] (25 MBps) [2024-12-11T14:04:50.362Z] Copying: 612/1024 [MB] (25 MBps) [2024-12-11T14:04:51.299Z] Copying: 638/1024 [MB] (25 MBps) [2024-12-11T14:04:52.238Z] Copying: 663/1024 [MB] (25 MBps) [2024-12-11T14:04:53.175Z] Copying: 688/1024 [MB] (25 MBps) [2024-12-11T14:04:54.112Z] Copying: 714/1024 [MB] (25 MBps) [2024-12-11T14:04:55.048Z] Copying: 739/1024 [MB] (25 MBps) [2024-12-11T14:04:56.424Z] Copying: 765/1024 [MB] (25 MBps) [2024-12-11T14:04:57.361Z] Copying: 790/1024 [MB] (24 MBps) [2024-12-11T14:04:58.299Z] Copying: 815/1024 [MB] (25 MBps) [2024-12-11T14:04:59.236Z] Copying: 840/1024 [MB] (25 MBps) [2024-12-11T14:05:00.173Z] Copying: 865/1024 [MB] (24 MBps) [2024-12-11T14:05:01.110Z] Copying: 890/1024 [MB] (25 MBps) [2024-12-11T14:05:02.048Z] Copying: 916/1024 [MB] (26 MBps) [2024-12-11T14:05:03.426Z] Copying: 942/1024 [MB] (25 
MBps) [2024-12-11T14:05:04.364Z] Copying: 967/1024 [MB] (25 MBps) [2024-12-11T14:05:05.312Z] Copying: 993/1024 [MB] (25 MBps) [2024-12-11T14:05:05.312Z] Copying: 1019/1024 [MB] (25 MBps) [2024-12-11T14:05:05.884Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-11 14:05:05.803498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.837 [2024-12-11 14:05:05.803572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:12.837 [2024-12-11 14:05:05.803595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:12.837 [2024-12-11 14:05:05.803611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.837 [2024-12-11 14:05:05.803644] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:12.837 [2024-12-11 14:05:05.809955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.837 [2024-12-11 14:05:05.810017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:12.837 [2024-12-11 14:05:05.810036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.293 ms 00:24:12.837 [2024-12-11 14:05:05.810051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.837 [2024-12-11 14:05:05.810374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.837 [2024-12-11 14:05:05.810398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:12.837 [2024-12-11 14:05:05.810413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:24:12.837 [2024-12-11 14:05:05.810428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.837 [2024-12-11 14:05:05.813332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.837 [2024-12-11 14:05:05.813360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:12.837 [2024-12-11 14:05:05.813372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.887 ms 00:24:12.837 [2024-12-11 14:05:05.813387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.837 [2024-12-11 14:05:05.818852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.837 [2024-12-11 14:05:05.818909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:12.837 [2024-12-11 14:05:05.818922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.451 ms 00:24:12.837 [2024-12-11 14:05:05.818932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.837 [2024-12-11 14:05:05.857276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.837 [2024-12-11 14:05:05.857323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:12.837 [2024-12-11 14:05:05.857339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.336 ms 00:24:12.837 [2024-12-11 14:05:05.857350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.837 [2024-12-11 14:05:05.878017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.837 [2024-12-11 14:05:05.878060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:12.837 [2024-12-11 14:05:05.878076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.650 ms 00:24:12.837 [2024-12-11 14:05:05.878088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
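At this point the restore test has streamed the full 1024 MiB back out of the recovered FTL device, and the trace steps that follow persist the L2P, NV cache, valid map, P2L, band and trim metadata before the superblock is marked clean. A minimal sketch of that read-back step, reusing the exact spdk_dd flags logged earlier in this run (--count=262144 blocks matches the 1024 MiB copied above at 4 KiB per block); the md5sum check is an assumed stand-in for whatever comparison test/ftl/restore.sh actually performs, and testfile.md5 is a hypothetical file name:

  # Read 262144 blocks from the restored ftl0 bdev into a regular file,
  # using the same flags as the spdk_dd invocation logged above.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
      --count=262144

  # Assumed verification step: the checksum of the read-back data must match
  # the one recorded before the dirty shutdown (testfile.md5 is hypothetical).
  md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5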
00:24:12.837 [2024-12-11 14:05:05.878257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.837 [2024-12-11 14:05:05.878275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:12.837 [2024-12-11 14:05:05.878286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:24:12.837 [2024-12-11 14:05:05.878296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.097 [2024-12-11 14:05:05.914802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.097 [2024-12-11 14:05:05.914853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:13.097 [2024-12-11 14:05:05.914869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.546 ms 00:24:13.097 [2024-12-11 14:05:05.914879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.097 [2024-12-11 14:05:05.951442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.097 [2024-12-11 14:05:05.951485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:13.097 [2024-12-11 14:05:05.951501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.577 ms 00:24:13.097 [2024-12-11 14:05:05.951528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.097 [2024-12-11 14:05:05.986819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.097 [2024-12-11 14:05:05.986870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:13.097 [2024-12-11 14:05:05.986885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.300 ms 00:24:13.097 [2024-12-11 14:05:05.986896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.097 [2024-12-11 14:05:06.023127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.097 [2024-12-11 14:05:06.023174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:13.097 [2024-12-11 14:05:06.023190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.203 ms 00:24:13.097 [2024-12-11 14:05:06.023201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.098 [2024-12-11 14:05:06.023241] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:13.098 [2024-12-11 14:05:06.023265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 
14:05:06.023360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 
00:24:13.098 [2024-12-11 14:05:06.023627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 
wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.023992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:13.098 [2024-12-11 14:05:06.024225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:13.099 [2024-12-11 14:05:06.024236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:13.099 [2024-12-11 14:05:06.024246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:13.099 [2024-12-11 14:05:06.024256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:13.099 [2024-12-11 14:05:06.024267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:13.099 [2024-12-11 14:05:06.024278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:13.099 [2024-12-11 14:05:06.024289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:13.099 [2024-12-11 14:05:06.024300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:13.099 [2024-12-11 14:05:06.024311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:13.099 [2024-12-11 14:05:06.024321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:13.099 [2024-12-11 14:05:06.024331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:13.099 [2024-12-11 14:05:06.024341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:13.099 [2024-12-11 14:05:06.024359] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:13.099 [2024-12-11 14:05:06.024369] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d0422185-0cc6-471c-ae4b-64140d5ed839 00:24:13.099 [2024-12-11 14:05:06.024380] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:13.099 [2024-12-11 14:05:06.024390] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:13.099 [2024-12-11 14:05:06.024400] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:13.099 [2024-12-11 14:05:06.024410] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:13.099 [2024-12-11 14:05:06.024441] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:13.099 [2024-12-11 14:05:06.024451] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:13.099 [2024-12-11 14:05:06.024461] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:13.099 [2024-12-11 
14:05:06.024470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:13.099 [2024-12-11 14:05:06.024480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:13.099 [2024-12-11 14:05:06.024490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.099 [2024-12-11 14:05:06.024500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:13.099 [2024-12-11 14:05:06.024512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.252 ms 00:24:13.099 [2024-12-11 14:05:06.024529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.099 [2024-12-11 14:05:06.044812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.099 [2024-12-11 14:05:06.045004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:13.099 [2024-12-11 14:05:06.045159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.257 ms 00:24:13.099 [2024-12-11 14:05:06.045201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.099 [2024-12-11 14:05:06.045837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.099 [2024-12-11 14:05:06.045960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:13.099 [2024-12-11 14:05:06.046068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:24:13.099 [2024-12-11 14:05:06.046118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.099 [2024-12-11 14:05:06.098464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.099 [2024-12-11 14:05:06.098678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:13.099 [2024-12-11 14:05:06.098852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.099 [2024-12-11 14:05:06.098896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.099 [2024-12-11 14:05:06.098983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.099 [2024-12-11 14:05:06.099210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:13.099 [2024-12-11 14:05:06.099261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.099 [2024-12-11 14:05:06.099292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.099 [2024-12-11 14:05:06.099396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.099 [2024-12-11 14:05:06.099583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:13.099 [2024-12-11 14:05:06.099626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.099 [2024-12-11 14:05:06.099657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.099 [2024-12-11 14:05:06.099701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.099 [2024-12-11 14:05:06.099791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:13.099 [2024-12-11 14:05:06.099817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.099 [2024-12-11 14:05:06.099852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.359 [2024-12-11 14:05:06.224979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.359 [2024-12-11 14:05:06.225042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:24:13.359 [2024-12-11 14:05:06.225059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.359 [2024-12-11 14:05:06.225070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.359 [2024-12-11 14:05:06.326405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.359 [2024-12-11 14:05:06.326458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:13.359 [2024-12-11 14:05:06.326478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.359 [2024-12-11 14:05:06.326488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.359 [2024-12-11 14:05:06.326577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.359 [2024-12-11 14:05:06.326590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:13.359 [2024-12-11 14:05:06.326601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.359 [2024-12-11 14:05:06.326611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.359 [2024-12-11 14:05:06.326665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.359 [2024-12-11 14:05:06.326686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:13.359 [2024-12-11 14:05:06.326703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.359 [2024-12-11 14:05:06.326717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.359 [2024-12-11 14:05:06.326862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.359 [2024-12-11 14:05:06.326884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:13.359 [2024-12-11 14:05:06.326901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.359 [2024-12-11 14:05:06.326918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.359 [2024-12-11 14:05:06.326971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.359 [2024-12-11 14:05:06.326984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:13.359 [2024-12-11 14:05:06.326995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.359 [2024-12-11 14:05:06.327006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.359 [2024-12-11 14:05:06.327060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.359 [2024-12-11 14:05:06.327078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:13.359 [2024-12-11 14:05:06.327096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.359 [2024-12-11 14:05:06.327106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.359 [2024-12-11 14:05:06.327150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.359 [2024-12-11 14:05:06.327162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:13.359 [2024-12-11 14:05:06.327172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.359 [2024-12-11 14:05:06.327184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.359 [2024-12-11 14:05:06.327313] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 524.656 ms, result 0
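(Editor's note: the clean 'FTL shutdown' above is the midpoint of the restore test — the device is torn down with data on it, and the `md5sum -c` step in the log just below re-reads the test file through the restarted FTL device and compares it against the checksum recorded before shutdown. A minimal Python sketch of the same style of check, illustrative only and not part of restore.sh; it mirrors the streaming hash and the "<digest>  <path>" line format that md5sum uses:)

  # Sketch: md5sum-style verification, as in `md5sum -c testfile.md5` below.
  import hashlib

  def md5_of(path, chunk=1 << 20):
      # Stream in 1 MiB chunks so a large test file never sits in memory whole.
      h = hashlib.md5()
      with open(path, "rb") as f:
          while block := f.read(chunk):
              h.update(block)
      return h.hexdigest()

  def check(md5_file):
      # Each line of an md5sum file is "<digest>  <path>"; succeed only if
      # every listed file still hashes to its recorded digest.
      with open(md5_file) as f:
          for line in f:
              if not line.strip():
                  continue
              digest, path = line.split(maxsplit=1)
              if md5_of(path.strip()) != digest:
                  return False
      return True

(A check("testfile.md5") returning True corresponds to the "testfile: OK" line that follows.)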
00:24:14.738 00:24:14.738 00:24:14.738 14:05:07 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:16.118 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:16.118 14:05:09 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:16.377 [2024-12-11 14:05:09.179698] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:24:16.377 [2024-12-11 14:05:09.179862] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81224 ] 00:24:16.377 [2024-12-11 14:05:09.356968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.636 [2024-12-11 14:05:09.471389] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.895 [2024-12-11 14:05:09.848457] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:16.895 [2024-12-11 14:05:09.848536] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:17.157 [2024-12-11 14:05:10.009158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.157 [2024-12-11 14:05:10.009222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:17.157 [2024-12-11 14:05:10.009237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:17.157 [2024-12-11 14:05:10.009249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.157 [2024-12-11 14:05:10.009300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.157 [2024-12-11 14:05:10.009316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:17.157 [2024-12-11 14:05:10.009326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:17.157 [2024-12-11 14:05:10.009336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.157 [2024-12-11 14:05:10.009358] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:17.157 [2024-12-11 14:05:10.010351] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:17.157 [2024-12-11 14:05:10.010375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.157 [2024-12-11 14:05:10.010386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:17.157 [2024-12-11 14:05:10.010397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:24:17.157 [2024-12-11 14:05:10.010408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.157 [2024-12-11 14:05:10.011880] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:17.157 [2024-12-11 14:05:10.030898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.157 [2024-12-11 14:05:10.030940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:17.157 [2024-12-11 14:05:10.030954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.049 ms 00:24:17.157 [2024-12-11
14:05:10.030965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.157 [2024-12-11 14:05:10.031036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.157 [2024-12-11 14:05:10.031050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:17.157 [2024-12-11 14:05:10.031061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:17.157 [2024-12-11 14:05:10.031071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.157 [2024-12-11 14:05:10.037863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.157 [2024-12-11 14:05:10.037900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:17.157 [2024-12-11 14:05:10.037912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.729 ms 00:24:17.157 [2024-12-11 14:05:10.037943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.157 [2024-12-11 14:05:10.038025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.157 [2024-12-11 14:05:10.038040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:17.157 [2024-12-11 14:05:10.038051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:17.157 [2024-12-11 14:05:10.038061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.157 [2024-12-11 14:05:10.038114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.157 [2024-12-11 14:05:10.038127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:17.157 [2024-12-11 14:05:10.038137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:17.157 [2024-12-11 14:05:10.038147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.157 [2024-12-11 14:05:10.038177] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:17.157 [2024-12-11 14:05:10.042964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.157 [2024-12-11 14:05:10.042997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:17.157 [2024-12-11 14:05:10.043013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.800 ms 00:24:17.157 [2024-12-11 14:05:10.043024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.157 [2024-12-11 14:05:10.043058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.157 [2024-12-11 14:05:10.043070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:17.158 [2024-12-11 14:05:10.043080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:17.158 [2024-12-11 14:05:10.043090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.158 [2024-12-11 14:05:10.043149] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:17.158 [2024-12-11 14:05:10.043175] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:17.158 [2024-12-11 14:05:10.043220] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:17.158 [2024-12-11 14:05:10.043241] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:17.158 [2024-12-11 
14:05:10.043330] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:17.158 [2024-12-11 14:05:10.043343] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:17.158 [2024-12-11 14:05:10.043356] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:17.158 [2024-12-11 14:05:10.043369] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:17.158 [2024-12-11 14:05:10.043381] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:17.158 [2024-12-11 14:05:10.043392] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:17.158 [2024-12-11 14:05:10.043402] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:17.158 [2024-12-11 14:05:10.043412] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:17.158 [2024-12-11 14:05:10.043425] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:17.158 [2024-12-11 14:05:10.043435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.158 [2024-12-11 14:05:10.043445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:17.158 [2024-12-11 14:05:10.043456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:24:17.158 [2024-12-11 14:05:10.043466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.158 [2024-12-11 14:05:10.043539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.158 [2024-12-11 14:05:10.043550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:17.158 [2024-12-11 14:05:10.043560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:17.158 [2024-12-11 14:05:10.043570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.158 [2024-12-11 14:05:10.043662] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:17.158 [2024-12-11 14:05:10.043676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:17.158 [2024-12-11 14:05:10.043686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:17.158 [2024-12-11 14:05:10.043697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.158 [2024-12-11 14:05:10.043707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:17.158 [2024-12-11 14:05:10.043716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:17.158 [2024-12-11 14:05:10.043726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:17.158 [2024-12-11 14:05:10.043736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:17.158 [2024-12-11 14:05:10.043746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:17.158 [2024-12-11 14:05:10.043755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:17.158 [2024-12-11 14:05:10.043766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:17.158 [2024-12-11 14:05:10.043776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:17.158 [2024-12-11 14:05:10.043785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
00:24:17.158 [2024-12-11 14:05:10.043805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:17.158 [2024-12-11 14:05:10.043815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:17.158 [2024-12-11 14:05:10.043844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.158 [2024-12-11 14:05:10.043855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:17.158 [2024-12-11 14:05:10.043864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:17.158 [2024-12-11 14:05:10.043874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.158 [2024-12-11 14:05:10.043884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:17.158 [2024-12-11 14:05:10.043893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:17.158 [2024-12-11 14:05:10.043903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:17.158 [2024-12-11 14:05:10.043912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:17.158 [2024-12-11 14:05:10.043921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:17.158 [2024-12-11 14:05:10.043930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:17.158 [2024-12-11 14:05:10.043939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:17.158 [2024-12-11 14:05:10.043948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:17.158 [2024-12-11 14:05:10.043957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:17.158 [2024-12-11 14:05:10.043966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:17.158 [2024-12-11 14:05:10.043975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:17.158 [2024-12-11 14:05:10.043984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:17.158 [2024-12-11 14:05:10.043993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:17.158 [2024-12-11 14:05:10.044002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:17.158 [2024-12-11 14:05:10.044011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:17.158 [2024-12-11 14:05:10.044020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:17.158 [2024-12-11 14:05:10.044037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:17.158 [2024-12-11 14:05:10.044046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:17.158 [2024-12-11 14:05:10.044055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:17.158 [2024-12-11 14:05:10.044065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:17.158 [2024-12-11 14:05:10.044073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.158 [2024-12-11 14:05:10.044083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:17.158 [2024-12-11 14:05:10.044092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:17.158 [2024-12-11 14:05:10.044102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.158 [2024-12-11 14:05:10.044111] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:17.158 [2024-12-11 14:05:10.044121] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:17.158 [2024-12-11 14:05:10.044131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:17.158 [2024-12-11 14:05:10.044141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:17.158 [2024-12-11 14:05:10.044151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:17.158 [2024-12-11 14:05:10.044160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:17.158 [2024-12-11 14:05:10.044169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:17.158 [2024-12-11 14:05:10.044179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:17.158 [2024-12-11 14:05:10.044188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:17.158 [2024-12-11 14:05:10.044198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:17.158 [2024-12-11 14:05:10.044209] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:17.158 [2024-12-11 14:05:10.044221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:17.158 [2024-12-11 14:05:10.044237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:17.158 [2024-12-11 14:05:10.044247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:17.158 [2024-12-11 14:05:10.044261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:17.158 [2024-12-11 14:05:10.044277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:17.158 [2024-12-11 14:05:10.044293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:17.158 [2024-12-11 14:05:10.044305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:17.158 [2024-12-11 14:05:10.044321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:17.158 [2024-12-11 14:05:10.044338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:17.158 [2024-12-11 14:05:10.044352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:17.158 [2024-12-11 14:05:10.044367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:17.158 [2024-12-11 14:05:10.044384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:17.158 [2024-12-11 14:05:10.044403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:17.158 [2024-12-11 14:05:10.044420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:17.158 [2024-12-11 
14:05:10.044439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:17.158 [2024-12-11 14:05:10.044452] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:17.158 [2024-12-11 14:05:10.044464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:17.158 [2024-12-11 14:05:10.044476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:17.158 [2024-12-11 14:05:10.044487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:17.158 [2024-12-11 14:05:10.044497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:17.158 [2024-12-11 14:05:10.044508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:17.158 [2024-12-11 14:05:10.044520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.158 [2024-12-11 14:05:10.044531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:17.158 [2024-12-11 14:05:10.044542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:24:17.159 [2024-12-11 14:05:10.044552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
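(Editor's note: the MiB-denominated layout dump and the hex "Region type:..." superblock tables above are two views of the same regions, and they agree if one FTL block is 4 KiB — an inference from the numbers themselves, not a value stated in this log. For instance, the row with blk_offs:0x20 blk_sz:0x5000 matches the l2p region's 0.12 MiB offset and 80.00 MiB size:)

  # Cross-check of the two layout views above (hex block table vs MiB dump).
  BLOCK = 4096  # assumed FTL block size in bytes, inferred from the dump
  regions = [
      # (name from the MiB dump, blk_offs, blk_sz from the hex table)
      ("l2p", 0x20, 0x5000),
      ("band_md", 0x5020, 0x80),
  ]
  for name, blk_offs, blk_sz in regions:
      off_mib = blk_offs * BLOCK / 2**20
      size_mib = blk_sz * BLOCK / 2**20
      print(f"{name}: offset {off_mib:.2f} MiB, blocks {size_mib:.2f} MiB")
  # -> l2p: offset 0.12 MiB, blocks 80.00 MiB     (dump: 0.12 MiB / 80.00 MiB)
  # -> band_md: offset 80.12 MiB, blocks 0.50 MiB (dump: 80.12 MiB / 0.50 MiB)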
00:24:17.159 [2024-12-11 14:05:10.083963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.159 [2024-12-11 14:05:10.084018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:17.159 [2024-12-11 14:05:10.084035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.414 ms 00:24:17.159 [2024-12-11 14:05:10.084050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.159 [2024-12-11 14:05:10.084148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.159 [2024-12-11 14:05:10.084160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:17.159 [2024-12-11 14:05:10.084171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:17.159 [2024-12-11 14:05:10.084181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.159 [2024-12-11 14:05:10.137131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.159 [2024-12-11 14:05:10.137181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:17.159 [2024-12-11 14:05:10.137197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.953 ms 00:24:17.159 [2024-12-11 14:05:10.137224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.159 [2024-12-11 14:05:10.137281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.159 [2024-12-11 14:05:10.137293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:17.159 [2024-12-11 14:05:10.137308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:17.159 [2024-12-11 14:05:10.137319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.159 [2024-12-11 14:05:10.137816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.159 [2024-12-11 14:05:10.137831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:17.159 [2024-12-11 14:05:10.138067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:24:17.159 [2024-12-11 14:05:10.138125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.159 [2024-12-11 14:05:10.138288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.159 [2024-12-11 14:05:10.138405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:17.159 [2024-12-11 14:05:10.138458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:24:17.159 [2024-12-11 14:05:10.138488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.159 [2024-12-11 14:05:10.157267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.159 [2024-12-11 14:05:10.157466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:17.159 [2024-12-11 14:05:10.157614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.701 ms 00:24:17.159 [2024-12-11 14:05:10.157657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.159 [2024-12-11 14:05:10.176717] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:17.159 [2024-12-11 14:05:10.176757] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:17.159 [2024-12-11 14:05:10.176773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.159 [2024-12-11 14:05:10.176784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:17.159 [2024-12-11 14:05:10.176795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.004 ms 00:24:17.159 [2024-12-11 14:05:10.176805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.419 [2024-12-11 14:05:10.205843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.419 [2024-12-11 14:05:10.206034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:17.419 [2024-12-11 14:05:10.206058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.034 ms 00:24:17.419 [2024-12-11 14:05:10.206070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.419 [2024-12-11 14:05:10.224235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.419 [2024-12-11 14:05:10.224274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:17.419 [2024-12-11 14:05:10.224287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.097 ms 00:24:17.419 [2024-12-11 14:05:10.224297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.419 [2024-12-11 14:05:10.242076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.419 [2024-12-11 14:05:10.242121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:17.419 [2024-12-11 14:05:10.242134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.768 ms 00:24:17.419 [2024-12-11 14:05:10.242160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.419 [2024-12-11 14:05:10.242907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.419 [2024-12-11 14:05:10.242931] mngt/ftl_mngt.c:
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:17.419 [2024-12-11 14:05:10.242947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.634 ms 00:24:17.419 [2024-12-11 14:05:10.242956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.419 [2024-12-11 14:05:10.326769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.419 [2024-12-11 14:05:10.327098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:17.419 [2024-12-11 14:05:10.327139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.924 ms 00:24:17.419 [2024-12-11 14:05:10.327150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.419 [2024-12-11 14:05:10.338372] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:17.419 [2024-12-11 14:05:10.341606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.419 [2024-12-11 14:05:10.341638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:17.419 [2024-12-11 14:05:10.341653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.425 ms 00:24:17.419 [2024-12-11 14:05:10.341680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.419 [2024-12-11 14:05:10.341781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.419 [2024-12-11 14:05:10.341796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:17.419 [2024-12-11 14:05:10.341808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:17.419 [2024-12-11 14:05:10.341822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.419 [2024-12-11 14:05:10.341909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.419 [2024-12-11 14:05:10.341923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:17.419 [2024-12-11 14:05:10.341934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:17.419 [2024-12-11 14:05:10.341944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.419 [2024-12-11 14:05:10.341969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.419 [2024-12-11 14:05:10.341980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:17.419 [2024-12-11 14:05:10.341990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:17.419 [2024-12-11 14:05:10.342000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.419 [2024-12-11 14:05:10.342036] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:17.419 [2024-12-11 14:05:10.342047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.419 [2024-12-11 14:05:10.342058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:17.419 [2024-12-11 14:05:10.342068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:17.419 [2024-12-11 14:05:10.342077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.419 [2024-12-11 14:05:10.378832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.419 [2024-12-11 14:05:10.378889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:17.419 [2024-12-11 14:05:10.378927] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.759 ms 00:24:17.419 [2024-12-11 14:05:10.378939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.419 [2024-12-11 14:05:10.379022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:17.419 [2024-12-11 14:05:10.379034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:17.419 [2024-12-11 14:05:10.379046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:17.419 [2024-12-11 14:05:10.379056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:17.419 [2024-12-11 14:05:10.380153] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 371.122 ms, result 0 00:24:18.355  [2024-12-11T14:05:12.780Z] Copying: 25/1024 [MB] (25 MBps) [2024-12-11T14:05:13.717Z] Copying: 50/1024 [MB] (25 MBps) [2024-12-11T14:05:14.656Z] Copying: 75/1024 [MB] (24 MBps) [2024-12-11T14:05:15.602Z] Copying: 100/1024 [MB] (24 MBps) [2024-12-11T14:05:16.539Z] Copying: 125/1024 [MB] (25 MBps) [2024-12-11T14:05:17.476Z] Copying: 150/1024 [MB] (24 MBps) [2024-12-11T14:05:18.412Z] Copying: 174/1024 [MB] (24 MBps) [2024-12-11T14:05:19.388Z] Copying: 199/1024 [MB] (24 MBps) [2024-12-11T14:05:20.763Z] Copying: 223/1024 [MB] (24 MBps) [2024-12-11T14:05:21.699Z] Copying: 248/1024 [MB] (24 MBps) [2024-12-11T14:05:22.635Z] Copying: 271/1024 [MB] (23 MBps) [2024-12-11T14:05:23.588Z] Copying: 295/1024 [MB] (24 MBps) [2024-12-11T14:05:24.526Z] Copying: 320/1024 [MB] (24 MBps) [2024-12-11T14:05:25.463Z] Copying: 345/1024 [MB] (24 MBps) [2024-12-11T14:05:26.398Z] Copying: 369/1024 [MB] (24 MBps) [2024-12-11T14:05:27.795Z] Copying: 393/1024 [MB] (24 MBps) [2024-12-11T14:05:28.731Z] Copying: 418/1024 [MB] (24 MBps) [2024-12-11T14:05:29.667Z] Copying: 442/1024 [MB] (24 MBps) [2024-12-11T14:05:30.603Z] Copying: 466/1024 [MB] (23 MBps) [2024-12-11T14:05:31.538Z] Copying: 490/1024 [MB] (24 MBps) [2024-12-11T14:05:32.501Z] Copying: 516/1024 [MB] (25 MBps) [2024-12-11T14:05:33.442Z] Copying: 541/1024 [MB] (25 MBps) [2024-12-11T14:05:34.380Z] Copying: 566/1024 [MB] (25 MBps) [2024-12-11T14:05:35.763Z] Copying: 592/1024 [MB] (25 MBps) [2024-12-11T14:05:36.699Z] Copying: 617/1024 [MB] (25 MBps) [2024-12-11T14:05:37.635Z] Copying: 642/1024 [MB] (24 MBps) [2024-12-11T14:05:38.571Z] Copying: 668/1024 [MB] (26 MBps) [2024-12-11T14:05:39.508Z] Copying: 692/1024 [MB] (23 MBps) [2024-12-11T14:05:40.444Z] Copying: 716/1024 [MB] (24 MBps) [2024-12-11T14:05:41.381Z] Copying: 730/1024 [MB] (13 MBps) [2024-12-11T14:05:42.758Z] Copying: 754/1024 [MB] (23 MBps) [2024-12-11T14:05:43.694Z] Copying: 778/1024 [MB] (24 MBps) [2024-12-11T14:05:44.681Z] Copying: 802/1024 [MB] (23 MBps) [2024-12-11T14:05:45.623Z] Copying: 826/1024 [MB] (23 MBps) [2024-12-11T14:05:46.559Z] Copying: 850/1024 [MB] (24 MBps) [2024-12-11T14:05:47.495Z] Copying: 873/1024 [MB] (23 MBps) [2024-12-11T14:05:48.432Z] Copying: 897/1024 [MB] (23 MBps) [2024-12-11T14:05:49.368Z] Copying: 921/1024 [MB] (23 MBps) [2024-12-11T14:05:50.747Z] Copying: 945/1024 [MB] (24 MBps) [2024-12-11T14:05:51.683Z] Copying: 970/1024 [MB] (24 MBps) [2024-12-11T14:05:52.619Z] Copying: 994/1024 [MB] (24 MBps) [2024-12-11T14:05:53.554Z] Copying: 1018/1024 [MB] (24 MBps) [2024-12-11T14:05:53.554Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-11 14:05:53.287141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.507 [2024-12-11 14:05:53.287388] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:00.507 [2024-12-11 14:05:53.287504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:00.507 [2024-12-11 14:05:53.287545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.507 [2024-12-11 14:05:53.289412] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:00.507 [2024-12-11 14:05:53.295420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.507 [2024-12-11 14:05:53.295460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:00.507 [2024-12-11 14:05:53.295475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.854 ms 00:25:00.507 [2024-12-11 14:05:53.295487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.507 [2024-12-11 14:05:53.305981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.507 [2024-12-11 14:05:53.306025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:00.507 [2024-12-11 14:05:53.306041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.667 ms 00:25:00.507 [2024-12-11 14:05:53.306059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.507 [2024-12-11 14:05:53.329976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.507 [2024-12-11 14:05:53.330207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:00.507 [2024-12-11 14:05:53.330233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.935 ms 00:25:00.507 [2024-12-11 14:05:53.330245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.507 [2024-12-11 14:05:53.335289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.507 [2024-12-11 14:05:53.335326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:00.507 [2024-12-11 14:05:53.335340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.012 ms 00:25:00.507 [2024-12-11 14:05:53.335358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.507 [2024-12-11 14:05:53.372021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.507 [2024-12-11 14:05:53.372072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:00.507 [2024-12-11 14:05:53.372088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.667 ms 00:25:00.507 [2024-12-11 14:05:53.372099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.507 [2024-12-11 14:05:53.392739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.507 [2024-12-11 14:05:53.392788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:00.507 [2024-12-11 14:05:53.392803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.630 ms 00:25:00.507 [2024-12-11 14:05:53.392813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.507 [2024-12-11 14:05:53.513417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.507 [2024-12-11 14:05:53.513520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:00.507 [2024-12-11 14:05:53.513538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 120.737 ms 00:25:00.507 [2024-12-11 14:05:53.513549] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.507 [2024-12-11 14:05:53.551279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.507 [2024-12-11 14:05:53.551338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:00.507 [2024-12-11 14:05:53.551355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.769 ms 00:25:00.507 [2024-12-11 14:05:53.551366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.767 [2024-12-11 14:05:53.588288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.767 [2024-12-11 14:05:53.588341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:00.767 [2024-12-11 14:05:53.588357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.930 ms 00:25:00.767 [2024-12-11 14:05:53.588368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.767 [2024-12-11 14:05:53.624772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.767 [2024-12-11 14:05:53.624836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:00.767 [2024-12-11 14:05:53.624853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.417 ms 00:25:00.767 [2024-12-11 14:05:53.624863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.767 [2024-12-11 14:05:53.660687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.767 [2024-12-11 14:05:53.660754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:00.767 [2024-12-11 14:05:53.660769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.793 ms 00:25:00.767 [2024-12-11 14:05:53.660780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.767 [2024-12-11 14:05:53.660841] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:00.767 [2024-12-11 14:05:53.660859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 109056 / 261120 wr_cnt: 1 state: open 00:25:00.767 [2024-12-11 14:05:53.660874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.660886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.660897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.660908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.660919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.660931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.660942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.660952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.660963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.660973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 
00:25:00.767 [2024-12-11 14:05:53.660984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.660994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:00.767 [2024-12-11 14:05:53.661237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 
wr_cnt: 0 state: free
00:25:00.767-00:25:00.768 [2024-12-11 14:05:53.661247-662111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 37-100: 0 / 261120 wr_cnt: 0 state: free
00:25:00.768 [2024-12-11 14:05:53.662130] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:00.768 [2024-12-11 14:05:53.662141] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d0422185-0cc6-471c-ae4b-64140d5ed839
00:25:00.768 [2024-12-11 14:05:53.662152] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 109056
00:25:00.768 [2024-12-11 14:05:53.662162] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 110016
00:25:00.768 [2024-12-11 14:05:53.662172] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 109056
00:25:00.768 [2024-12-11 14:05:53.662183] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0088
00:25:00.768 [2024-12-11 14:05:53.662212] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:00.768 [2024-12-11 14:05:53.662223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:00.768 [2024-12-11 14:05:53.662233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:00.768 [2024-12-11 14:05:53.662244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:00.768 [2024-12-11 14:05:53.662258] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
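The WAF figure in the statistics dump above is just total media writes divided by user writes. A minimal standalone check of that arithmetic (illustrative C, not SPDK source; the variable names are this sketch's own):

    #include <stdio.h>

    int main(void)
    {
        /* Counters copied from the "Dump statistics" records above. */
        unsigned long long total_writes = 110016; /* user + internal (metadata, relocation) writes */
        unsigned long long user_writes  = 109056; /* host-issued writes only */

        /* Write amplification factor: media writes per user write. */
        printf("WAF: %.4f\n", (double)total_writes / (double)user_writes); /* 1.0088, as logged */
        return 0;
    }

The same ratio accounts for the WAF of 1.0436 reported after the restore pass later in this run (22976 / 22016).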
00:25:00.768 [2024-12-11 14:05:53.662276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:00.768 [2024-12-11 14:05:53.662294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:00.768 [2024-12-11 14:05:53.662311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.436 ms
00:25:00.768 [2024-12-11 14:05:53.662322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:00.768 [2024-12-11 14:05:53.681889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:00.768 [2024-12-11 14:05:53.681936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:25:00.768 [2024-12-11 14:05:53.681958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.554 ms
00:25:00.768 [2024-12-11 14:05:53.681968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:00.768 [2024-12-11 14:05:53.682524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:00.768 [2024-12-11 14:05:53.682535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:25:00.768 [2024-12-11 14:05:53.682547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms
00:25:00.768 [2024-12-11 14:05:53.682556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:00.768-00:25:01.027 [2024-12-11 14:05:53.735103-859467] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache (each duration: 0.000 ms, status: 0)
00:25:01.027 [2024-12-11 14:05:53.960805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:01.027 [2024-12-11 
14:05:53.960890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:01.027 [2024-12-11 14:05:53.960906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:01.027 [2024-12-11 14:05:53.960917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.027 [2024-12-11 14:05:53.961018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:01.027 [2024-12-11 14:05:53.961031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:01.027 [2024-12-11 14:05:53.961042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:01.027 [2024-12-11 14:05:53.961058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.027 [2024-12-11 14:05:53.961108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:01.027 [2024-12-11 14:05:53.961120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:01.027 [2024-12-11 14:05:53.961130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:01.027 [2024-12-11 14:05:53.961140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.027 [2024-12-11 14:05:53.961254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:01.027 [2024-12-11 14:05:53.961274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:01.027 [2024-12-11 14:05:53.961290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:01.027 [2024-12-11 14:05:53.961311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.027 [2024-12-11 14:05:53.961359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:01.027 [2024-12-11 14:05:53.961372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:01.027 [2024-12-11 14:05:53.961382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:01.027 [2024-12-11 14:05:53.961392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.027 [2024-12-11 14:05:53.961430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:01.027 [2024-12-11 14:05:53.961442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:01.027 [2024-12-11 14:05:53.961452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:01.027 [2024-12-11 14:05:53.961461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.027 [2024-12-11 14:05:53.961525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:01.027 [2024-12-11 14:05:53.961540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:01.027 [2024-12-11 14:05:53.961550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:01.027 [2024-12-11 14:05:53.961561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.028 [2024-12-11 14:05:53.961687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 678.470 ms, result 0 00:25:02.930 00:25:02.930 00:25:02.930 14:05:55 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:25:02.930 [2024-12-11 14:05:55.638807] Starting 
SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:25:02.930 [2024-12-11 14:05:55.638964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81692 ] 00:25:02.930 [2024-12-11 14:05:55.817900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.930 [2024-12-11 14:05:55.933858] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.507 [2024-12-11 14:05:56.295772] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:03.507 [2024-12-11 14:05:56.295871] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:03.507 [2024-12-11 14:05:56.456917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.507 [2024-12-11 14:05:56.457214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:03.507 [2024-12-11 14:05:56.457244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:03.507 [2024-12-11 14:05:56.457256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.507 [2024-12-11 14:05:56.457326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.507 [2024-12-11 14:05:56.457343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:03.507 [2024-12-11 14:05:56.457355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:03.507 [2024-12-11 14:05:56.457366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.507 [2024-12-11 14:05:56.457390] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:03.507 [2024-12-11 14:05:56.458444] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:03.507 [2024-12-11 14:05:56.458467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.507 [2024-12-11 14:05:56.458478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:03.507 [2024-12-11 14:05:56.458490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:25:03.507 [2024-12-11 14:05:56.458500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.507 [2024-12-11 14:05:56.460027] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:03.507 [2024-12-11 14:05:56.479214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.507 [2024-12-11 14:05:56.479408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:03.507 [2024-12-11 14:05:56.479432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.218 ms 00:25:03.507 [2024-12-11 14:05:56.479445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.507 [2024-12-11 14:05:56.479541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.507 [2024-12-11 14:05:56.479555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:03.507 [2024-12-11 14:05:56.479566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:03.507 [2024-12-11 14:05:56.479576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.507 [2024-12-11 
14:05:56.486461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.507 [2024-12-11 14:05:56.486668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:03.507 [2024-12-11 14:05:56.486691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.817 ms 00:25:03.507 [2024-12-11 14:05:56.486708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.507 [2024-12-11 14:05:56.486797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.507 [2024-12-11 14:05:56.486809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:03.507 [2024-12-11 14:05:56.486820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:03.507 [2024-12-11 14:05:56.486850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.507 [2024-12-11 14:05:56.486900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.507 [2024-12-11 14:05:56.486912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:03.507 [2024-12-11 14:05:56.486923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:03.507 [2024-12-11 14:05:56.486933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.507 [2024-12-11 14:05:56.486962] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:03.507 [2024-12-11 14:05:56.491807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.507 [2024-12-11 14:05:56.491854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:03.507 [2024-12-11 14:05:56.491870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.858 ms 00:25:03.507 [2024-12-11 14:05:56.491880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.507 [2024-12-11 14:05:56.491916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.507 [2024-12-11 14:05:56.491927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:03.507 [2024-12-11 14:05:56.491937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:03.507 [2024-12-11 14:05:56.491947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.507 [2024-12-11 14:05:56.492007] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:03.507 [2024-12-11 14:05:56.492032] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:03.507 [2024-12-11 14:05:56.492067] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:03.507 [2024-12-11 14:05:56.492088] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:03.507 [2024-12-11 14:05:56.492177] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:03.507 [2024-12-11 14:05:56.492190] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:03.507 [2024-12-11 14:05:56.492205] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:03.507 [2024-12-11 14:05:56.492224] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB
00:25:03.508 [2024-12-11 14:05:56.492245] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:25:03.508 [2024-12-11 14:05:56.492257] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:25:03.508 [2024-12-11 14:05:56.492267] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:25:03.508 [2024-12-11 14:05:56.492277] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:25:03.508 [2024-12-11 14:05:56.492290] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:25:03.508 [2024-12-11 14:05:56.492301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:03.508 [2024-12-11 14:05:56.492311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:25:03.508 [2024-12-11 14:05:56.492322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms
00:25:03.508 [2024-12-11 14:05:56.492331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:03.508 [2024-12-11 14:05:56.492408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:03.508 [2024-12-11 14:05:56.492425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:25:03.508 [2024-12-11 14:05:56.492443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms
00:25:03.508 [2024-12-11 14:05:56.492459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:03.508 [2024-12-11 14:05:56.492554] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
  Region sb: offset 0.00 MiB, blocks 0.12 MiB
  Region l2p: offset 0.12 MiB, blocks 80.00 MiB
  Region band_md: offset 80.12 MiB, blocks 0.50 MiB
  Region band_md_mirror: offset 80.62 MiB, blocks 0.50 MiB
  Region nvc_md: offset 113.88 MiB, blocks 0.12 MiB
  Region nvc_md_mirror: offset 114.00 MiB, blocks 0.12 MiB
  Region p2l0: offset 81.12 MiB, blocks 8.00 MiB
  Region p2l1: offset 89.12 MiB, blocks 8.00 MiB
  Region p2l2: offset 97.12 MiB, blocks 8.00 MiB
  Region p2l3: offset 105.12 MiB, blocks 8.00 MiB
  Region trim_md: offset 113.12 MiB, blocks 0.25 MiB
  Region trim_md_mirror: offset 113.38 MiB, blocks 0.25 MiB
  Region trim_log: offset 113.62 MiB, blocks 0.12 MiB
  Region trim_log_mirror: offset 113.75 MiB, blocks 0.12 MiB
00:25:03.508 [2024-12-11 14:05:56.493004] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
  Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
  Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
  Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:25:03.508 [2024-12-11 14:05:56.493100] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
  Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
  Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
  Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
  Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
  Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
  Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
  Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
  Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
  Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
  Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
  Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
  Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:25:03.508 [2024-12-11 14:05:56.493274] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
  Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
  Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
  Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
  Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:25:03.508 [2024-12-11 14:05:56.493337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:03.508 [2024-12-11 14:05:56.493348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:25:03.508 [2024-12-11 14:05:56.493358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms
00:25:03.508 [2024-12-11 14:05:56.493368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
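The sb_v5 rows above express each region as a hex block offset and block count; at the 4 KiB FTL block size these reproduce the MiB figures in the dump_region records, and the region at type:0x2 (by position, apparently the l2p region) is sized exactly for an L2P table of 20971520 entries at 4 bytes each. An illustrative cross-check (standalone C, not SPDK code):

    #include <stdio.h>

    int main(void)
    {
        const double MIB = 1024.0 * 1024.0;
        const double block_size = 4096.0;  /* FTL block size implied by the dump */

        /* "Region type:0x2" from the sb_v5 rows above: blk_offs:0x20, blk_sz:0x5000. */
        unsigned long l2p_offs = 0x20, l2p_sz = 0x5000;
        printf("l2p offset: %.2f MiB\n", l2p_offs * block_size / MIB); /* 0.12 MiB, as dumped */
        printf("l2p size:   %.2f MiB\n", l2p_sz * block_size / MIB);   /* 80.00 MiB, as dumped */

        /* The table those blocks hold: 20971520 entries x 4-byte addresses. */
        unsigned long long entries = 20971520ULL;
        printf("L2P table:  %.2f MiB\n", entries * 4 / MIB);           /* 80.00 MiB */
        return 0;
    }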
00:25:03.508 [2024-12-11 14:05:56.532559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:03.508 [2024-12-11 14:05:56.532791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:25:03.508 [2024-12-11 14:05:56.532955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.202 ms
00:25:03.508 [2024-12-11 14:05:56.533005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:03.508 [2024-12-11 14:05:56.533128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:03.508 [2024-12-11 14:05:56.533298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:25:03.508 [2024-12-11 14:05:56.533340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:25:03.508 [2024-12-11 14:05:56.533371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:03.781 [2024-12-11 14:05:56.593109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:03.781 [2024-12-11 14:05:56.593304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:25:03.781 [2024-12-11 14:05:56.593406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.721 ms
00:25:03.781 [2024-12-11 14:05:56.593448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:03.781 [2024-12-11 14:05:56.593588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:03.781 [2024-12-11 14:05:56.593631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:25:03.781 [2024-12-11 14:05:56.593669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:25:03.781 [2024-12-11 14:05:56.593753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:03.781 [2024-12-11 14:05:56.594388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:03.781 [2024-12-11 14:05:56.594527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:25:03.781 [2024-12-11 14:05:56.594547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms
00:25:03.781 [2024-12-11 14:05:56.594557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:03.781 [2024-12-11 14:05:56.594690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:03.781 [2024-12-11 14:05:56.594704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:25:03.781 [2024-12-11 14:05:56.594718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms
00:25:03.781 [2024-12-11 14:05:56.594728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:03.781 [2024-12-11 14:05:56.614852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:03.781 [2024-12-11 14:05:56.615041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:25:03.781 [2024-12-11 14:05:56.615066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.134 ms
00:25:03.781 [2024-12-11 14:05:56.615077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:03.781 [2024-12-11 14:05:56.634964] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:25:03.781 [2024-12-11 14:05:56.635011] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:25:03.781 [2024-12-11 14:05:56.635026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:03.781 [2024-12-11 14:05:56.635037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:25:03.781 [2024-12-11 14:05:56.635050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.847 ms
00:25:03.781 [2024-12-11 14:05:56.635059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:03.781 [2024-12-11 14:05:56.665013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:03.781 [2024-12-11 14:05:56.665069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:25:03.781 [2024-12-11 14:05:56.665085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.938 ms
00:25:03.781 [2024-12-11 14:05:56.665096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:03.781 [2024-12-11 14:05:56.684001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:03.781 [2024-12-11 14:05:56.684052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:25:03.781 [2024-12-11 14:05:56.684067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.872 ms
00:25:03.781 [2024-12-11 14:05:56.684078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:03.781 [2024-12-11 14:05:56.702397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:03.781 [2024-12-11 14:05:56.702594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:25:03.781 [2024-12-11 14:05:56.702619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.297 ms
00:25:03.781 [2024-12-11 14:05:56.702630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:03.781 [2024-12-11 14:05:56.703543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:03.781 [2024-12-11 14:05:56.703572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:25:03.781 [2024-12-11 14:05:56.703589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms
00:25:03.781 [2024-12-11 14:05:56.703600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:03.781 [2024-12-11 14:05:56.791848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:03.781 [2024-12-11 14:05:56.792146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:25:03.781 
[2024-12-11 14:05:56.792184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.359 ms 00:25:03.781 [2024-12-11 14:05:56.792196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.781 [2024-12-11 14:05:56.803996] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:03.781 [2024-12-11 14:05:56.807223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.781 [2024-12-11 14:05:56.807395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:03.781 [2024-12-11 14:05:56.807421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.990 ms 00:25:03.781 [2024-12-11 14:05:56.807433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.781 [2024-12-11 14:05:56.807547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.781 [2024-12-11 14:05:56.807561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:03.781 [2024-12-11 14:05:56.807572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:03.781 [2024-12-11 14:05:56.807586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.781 [2024-12-11 14:05:56.809125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.781 [2024-12-11 14:05:56.809164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:03.781 [2024-12-11 14:05:56.809177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.476 ms 00:25:03.781 [2024-12-11 14:05:56.809188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.781 [2024-12-11 14:05:56.809226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.781 [2024-12-11 14:05:56.809237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:03.781 [2024-12-11 14:05:56.809247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:03.781 [2024-12-11 14:05:56.809257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.781 [2024-12-11 14:05:56.809297] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:03.781 [2024-12-11 14:05:56.809311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.781 [2024-12-11 14:05:56.809321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:03.781 [2024-12-11 14:05:56.809332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:03.781 [2024-12-11 14:05:56.809341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.040 [2024-12-11 14:05:56.846123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.040 [2024-12-11 14:05:56.846344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:04.040 [2024-12-11 14:05:56.846376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.821 ms 00:25:04.040 [2024-12-11 14:05:56.846387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.040 [2024-12-11 14:05:56.846462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.040 [2024-12-11 14:05:56.846474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:04.040 [2024-12-11 14:05:56.846485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:04.040 [2024-12-11 
14:05:56.846496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:04.040 [2024-12-11 14:05:56.847618] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 390.879 ms, result 0
00:25:05.418  [2024-12-11T14:05:59.401Z] Copying: 22/1024 [MB] (22 MBps)
[2024-12-11T14:06:36.939Z] Copying: 1024/1024 [MB] (average 25 MBps)
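The copy size follows from the spdk_dd arguments shown earlier (--ib=ftl0 --skip=131072 --count=262144): assuming the FTL bdev's 4 KiB logical block size, which is what makes the totals line up, --skip places the transfer 512 MiB into ftl0 and --count covers exactly the 1024 MB the progress meter reports. A quick check (illustrative C):

    #include <stdio.h>

    int main(void)
    {
        /* Parameters from the spdk_dd command line above (units: FTL blocks). */
        unsigned long long skip = 131072, count = 262144;
        unsigned long long block_size = 4096;  /* 4 KiB FTL block, implied by the totals */

        printf("skip:  %llu MiB\n", skip * block_size >> 20);   /* 512 MiB into ftl0 */
        printf("count: %llu MiB\n", count * block_size >> 20);  /* 1024 MiB, matching Copying: 1024/1024 [MB] */
        return 0;
    }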
[2024-12-11 14:06:36.811628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:43.892 [2024-12-11 14:06:36.811710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:25:43.892 [2024-12-11 14:06:36.811731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms
00:25:43.892 [2024-12-11 14:06:36.811752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:43.892 [2024-12-11 14:06:36.811784] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:43.892 [2024-12-11 14:06:36.819212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:43.892 [2024-12-11 14:06:36.819291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:25:43.892 [2024-12-11 14:06:36.819314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.416 ms
00:25:43.892 [2024-12-11 14:06:36.819331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:43.892 [2024-12-11 14:06:36.819693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:43.892 [2024-12-11 14:06:36.819716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:25:43.892 [2024-12-11 14:06:36.819735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms
00:25:43.893 [2024-12-11 14:06:36.819763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:43.893 [2024-12-11 14:06:36.825886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:43.893 [2024-12-11 14:06:36.826070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:25:43.893 [2024-12-11 14:06:36.826105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.105 ms
00:25:43.893 [2024-12-11 14:06:36.826120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:43.893 [2024-12-11 14:06:36.832617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:43.893 [2024-12-11 14:06:36.832664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:25:43.893 [2024-12-11 14:06:36.832679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.461 ms
00:25:43.893 [2024-12-11 14:06:36.832696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:43.893 [2024-12-11 14:06:36.869517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:43.893 [2024-12-11 14:06:36.869563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:25:43.893 [2024-12-11 14:06:36.869578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.835 ms
00:25:43.893 [2024-12-11 14:06:36.869588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:43.893 [2024-12-11 14:06:36.890696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:43.893 [2024-12-11 14:06:36.890742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:25:43.893 [2024-12-11 14:06:36.890757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.099 ms
00:25:43.893 [2024-12-11 14:06:36.890769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.153 [2024-12-11 14:06:37.040505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:44.153 [2024-12-11 14:06:37.040594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:25:44.153 [2024-12-11 14:06:37.040613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 149.931 ms
00:25:44.153 [2024-12-11 14:06:37.040624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.153 [2024-12-11 14:06:37.078115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:44.153 [2024-12-11 14:06:37.078172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:25:44.153 [2024-12-11 14:06:37.078188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.531 ms
00:25:44.153 [2024-12-11 14:06:37.078198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.153 [2024-12-11 14:06:37.115523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:44.153 [2024-12-11 14:06:37.115575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist trim metadata 00:25:44.153 [2024-12-11 14:06:37.115590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.340 ms 00:25:44.153 [2024-12-11 14:06:37.115616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.153 [2024-12-11 14:06:37.151282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.153 [2024-12-11 14:06:37.151492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:44.153 [2024-12-11 14:06:37.151515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.676 ms 00:25:44.153 [2024-12-11 14:06:37.151526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.153 [2024-12-11 14:06:37.187176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.153 [2024-12-11 14:06:37.187222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:44.153 [2024-12-11 14:06:37.187238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.622 ms 00:25:44.153 [2024-12-11 14:06:37.187248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.153 [2024-12-11 14:06:37.187291] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:44.153 [2024-12-11 14:06:37.187308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:44.153 [2024-12-11 14:06:37.187322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 
00:25:44.153 [2024-12-11 14:06:37.187484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:44.153 [2024-12-11 14:06:37.187740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 
wr_cnt: 0 state: free
00:25:44.153 [2024-12-11 14:06:37.187750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:25:44.153 - 00:25:44.154 [2024-12-11 14:06:37.187761 - 14:06:37.188380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 43-100: 0 / 261120 wr_cnt: 0 state: free (58 identical per-band entries condensed)
00:25:44.154 [2024-12-11 14:06:37.188397] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:44.154 [2024-12-11 14:06:37.188407] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d0422185-0cc6-471c-ae4b-64140d5ed839
00:25:44.154 [2024-12-11 14:06:37.188418] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:25:44.154 [2024-12-11 14:06:37.188428] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 22976
00:25:44.154 [2024-12-11 14:06:37.188438] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 22016
00:25:44.154 [2024-12-11 14:06:37.188448] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0436
00:25:44.154 [2024-12-11 14:06:37.188463] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:44.154 [2024-12-11 14:06:37.188484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:44.154 [2024-12-11 14:06:37.188494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:44.154 [2024-12-11 14:06:37.188504] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:44.154 [2024-12-11 14:06:37.188512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:44.154 [2024-12-11 14:06:37.188522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:44.154 [2024-12-11 14:06:37.188532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:44.154 [2024-12-11 14:06:37.188543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.234 ms
00:25:44.154 [2024-12-11 14:06:37.188553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.414 [2024-12-11 14:06:37.208507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:44.414 [2024-12-11 14:06:37.208550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:25:44.414 [2024-12-11 14:06:37.208570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.944 ms
00:25:44.414 [2024-12-11 14:06:37.208597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
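The WAF that ftl_dev_dump_stats reports above is simply the ratio of media writes to host writes, and it checks out against the dumped counters:

    WAF = total writes / user writes = 22976 / 22016 = 1.0436

In other words, roughly 4.4% of what the FTL wrote during this run was its own housekeeping on top of the user data.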
00:25:44.414 [2024-12-11 14:06:37.209171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:44.414 [2024-12-11 14:06:37.209188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:25:44.414 [2024-12-11 14:06:37.209200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms
00:25:44.414 [2024-12-11 14:06:37.209209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.414 [2024-12-11 14:06:37.262044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:44.414 [2024-12-11 14:06:37.262295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:25:44.414 [2024-12-11 14:06:37.262320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:44.414 [2024-12-11 14:06:37.262331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.414 [2024-12-11 14:06:37.262422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:44.414 [2024-12-11 14:06:37.262436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:25:44.414 [2024-12-11 14:06:37.262447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:44.414 [2024-12-11 14:06:37.262457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.414 [2024-12-11 14:06:37.262562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:44.414 [2024-12-11 14:06:37.262575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:25:44.414 [2024-12-11 14:06:37.262590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:44.414 [2024-12-11 14:06:37.262600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.414 [2024-12-11 14:06:37.262617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:44.414 [2024-12-11 14:06:37.262628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:25:44.414 [2024-12-11 14:06:37.262639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:44.414 [2024-12-11 14:06:37.262648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.414 [2024-12-11 14:06:37.387521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:44.414 [2024-12-11 14:06:37.387595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:25:44.414 [2024-12-11 14:06:37.387611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:44.414 [2024-12-11 14:06:37.387621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.673 [2024-12-11 14:06:37.489254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:44.673 [2024-12-11 14:06:37.489551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:25:44.673 [2024-12-11 14:06:37.489576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:44.673 [2024-12-11 14:06:37.489588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.673 [2024-12-11 14:06:37.489687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:44.673 [2024-12-11 14:06:37.489699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:25:44.673 [2024-12-11 14:06:37.489710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:44.673 [2024-12-11 14:06:37.489724]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.673 [2024-12-11 14:06:37.489768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.673 [2024-12-11 14:06:37.489780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:44.673 [2024-12-11 14:06:37.489790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.673 [2024-12-11 14:06:37.489799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.673 [2024-12-11 14:06:37.489943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.673 [2024-12-11 14:06:37.489958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:44.673 [2024-12-11 14:06:37.489969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.673 [2024-12-11 14:06:37.489978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.673 [2024-12-11 14:06:37.490032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.673 [2024-12-11 14:06:37.490045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:44.673 [2024-12-11 14:06:37.490055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.674 [2024-12-11 14:06:37.490065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.674 [2024-12-11 14:06:37.490110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.674 [2024-12-11 14:06:37.490122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:44.674 [2024-12-11 14:06:37.490131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.674 [2024-12-11 14:06:37.490142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.674 [2024-12-11 14:06:37.490185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.674 [2024-12-11 14:06:37.490197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:44.674 [2024-12-11 14:06:37.490207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.674 [2024-12-11 14:06:37.490216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.674 [2024-12-11 14:06:37.490335] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 679.806 ms, result 0 00:25:45.611 00:25:45.611 00:25:45.611 14:06:38 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:47.516 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:47.516 14:06:40 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:47.516 14:06:40 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:47.516 14:06:40 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:47.516 14:06:40 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:47.516 14:06:40 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:47.516 14:06:40 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 80073 00:25:47.516 14:06:40 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 80073 ']' 00:25:47.516 Process with pid 80073 is not found 00:25:47.516 14:06:40 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 
80073 00:25:47.516 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80073) - No such process 00:25:47.516 14:06:40 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 80073 is not found' 00:25:47.516 Remove shared memory files 00:25:47.516 14:06:40 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:47.516 14:06:40 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:47.516 14:06:40 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:47.516 14:06:40 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:47.516 14:06:40 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:47.516 14:06:40 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:47.516 14:06:40 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:47.516 ************************************ 00:25:47.516 END TEST ftl_restore 00:25:47.516 ************************************ 00:25:47.516 00:25:47.516 real 3m22.046s 00:25:47.516 user 3m9.127s 00:25:47.516 sys 0m13.885s 00:25:47.516 14:06:40 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:47.516 14:06:40 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:47.516 14:06:40 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:47.516 14:06:40 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:47.516 14:06:40 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:47.516 14:06:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:47.516 ************************************ 00:25:47.516 START TEST ftl_dirty_shutdown 00:25:47.516 ************************************ 00:25:47.774 14:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:47.774 * Looking for test storage... 
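Two harness idioms show up in the teardown above. killprocess probes whether the target is still alive using signal 0, which delivers nothing and only reports whether the PID exists (here it does not, hence "No such process"), and the restore test validates data integrity across the FTL shutdown/restore cycle with an md5 manifest. A minimal sketch of both checks, using this run's PID and test file paths:

    pid=80073
    if ! kill -0 "$pid" 2>/dev/null; then            # signal 0 = existence probe only
        echo "Process with pid $pid is not found"
    fi

    cd /home/vagrant/spdk_repo/spdk/test/ftl
    md5sum testfile > testfile.md5                   # recorded before the device is torn down
    md5sum -c testfile.md5                           # after restore: prints 'testfile: OK' on a match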
00:25:47.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:47.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.775 --rc genhtml_branch_coverage=1 00:25:47.775 --rc genhtml_function_coverage=1 00:25:47.775 --rc genhtml_legend=1 00:25:47.775 --rc geninfo_all_blocks=1 00:25:47.775 --rc geninfo_unexecuted_blocks=1 00:25:47.775 00:25:47.775 ' 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:47.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.775 --rc genhtml_branch_coverage=1 00:25:47.775 --rc genhtml_function_coverage=1 00:25:47.775 --rc genhtml_legend=1 00:25:47.775 --rc geninfo_all_blocks=1 00:25:47.775 --rc geninfo_unexecuted_blocks=1 00:25:47.775 00:25:47.775 ' 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:47.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.775 --rc genhtml_branch_coverage=1 00:25:47.775 --rc genhtml_function_coverage=1 00:25:47.775 --rc genhtml_legend=1 00:25:47.775 --rc geninfo_all_blocks=1 00:25:47.775 --rc geninfo_unexecuted_blocks=1 00:25:47.775 00:25:47.775 ' 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:47.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.775 --rc genhtml_branch_coverage=1 00:25:47.775 --rc genhtml_function_coverage=1 00:25:47.775 --rc genhtml_legend=1 00:25:47.775 --rc geninfo_all_blocks=1 00:25:47.775 --rc geninfo_unexecuted_blocks=1 00:25:47.775 00:25:47.775 ' 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:47.775 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:48.034 14:06:40 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82209 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82209 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82209 ']' 00:25:48.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:48.034 14:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:48.034 [2024-12-11 14:06:40.934935] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
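The xtrace above shows dirty_shutdown.sh consuming its options before the positional arguments: getopts with the ':u:c:' optstring (the leading colon enables silent error handling, and both letters take an argument), a case over $opt, then a shift so the remaining words are device BDFs. A minimal sketch of that pattern, with variable names mirroring the log (the purpose of -u is not shown in this excerpt):

    while getopts ':u:c:' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;   # PCI address of the NV-cache device (0000:00:10.0 here)
            u) uuid=$OPTARG ;;       # takes an argument per the optstring; unused in this run
        esac
    done
    shift $((OPTIND - 1))            # the log shows 'shift 2' after the single -c <bdf> pair
    device=$1                        # remaining positional argument: 0000:00:11.0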
00:25:48.034 [2024-12-11 14:06:40.935054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82209 ] 00:25:48.292 [2024-12-11 14:06:41.102794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.293 [2024-12-11 14:06:41.218146] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.229 14:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:49.229 14:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:49.229 14:06:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:49.229 14:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:49.229 14:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:49.229 14:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:49.229 14:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:49.229 14:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:49.488 14:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:49.488 14:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:49.488 14:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:49.489 14:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:49.489 14:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:49.489 14:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:49.489 14:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:49.489 14:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:49.748 14:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:49.748 { 00:25:49.748 "name": "nvme0n1", 00:25:49.748 "aliases": [ 00:25:49.748 "aaf77643-b77c-4f4a-ba64-802e09f6dede" 00:25:49.748 ], 00:25:49.748 "product_name": "NVMe disk", 00:25:49.748 "block_size": 4096, 00:25:49.748 "num_blocks": 1310720, 00:25:49.748 "uuid": "aaf77643-b77c-4f4a-ba64-802e09f6dede", 00:25:49.748 "numa_id": -1, 00:25:49.748 "assigned_rate_limits": { 00:25:49.748 "rw_ios_per_sec": 0, 00:25:49.748 "rw_mbytes_per_sec": 0, 00:25:49.748 "r_mbytes_per_sec": 0, 00:25:49.748 "w_mbytes_per_sec": 0 00:25:49.748 }, 00:25:49.748 "claimed": true, 00:25:49.748 "claim_type": "read_many_write_one", 00:25:49.748 "zoned": false, 00:25:49.748 "supported_io_types": { 00:25:49.748 "read": true, 00:25:49.748 "write": true, 00:25:49.748 "unmap": true, 00:25:49.748 "flush": true, 00:25:49.748 "reset": true, 00:25:49.748 "nvme_admin": true, 00:25:49.748 "nvme_io": true, 00:25:49.748 "nvme_io_md": false, 00:25:49.748 "write_zeroes": true, 00:25:49.748 "zcopy": false, 00:25:49.748 "get_zone_info": false, 00:25:49.748 "zone_management": false, 00:25:49.748 "zone_append": false, 00:25:49.748 "compare": true, 00:25:49.748 "compare_and_write": false, 00:25:49.748 "abort": true, 00:25:49.748 "seek_hole": false, 00:25:49.748 "seek_data": false, 00:25:49.748 
"copy": true, 00:25:49.748 "nvme_iov_md": false 00:25:49.748 }, 00:25:49.748 "driver_specific": { 00:25:49.748 "nvme": [ 00:25:49.748 { 00:25:49.748 "pci_address": "0000:00:11.0", 00:25:49.748 "trid": { 00:25:49.748 "trtype": "PCIe", 00:25:49.748 "traddr": "0000:00:11.0" 00:25:49.748 }, 00:25:49.748 "ctrlr_data": { 00:25:49.748 "cntlid": 0, 00:25:49.748 "vendor_id": "0x1b36", 00:25:49.748 "model_number": "QEMU NVMe Ctrl", 00:25:49.748 "serial_number": "12341", 00:25:49.748 "firmware_revision": "8.0.0", 00:25:49.748 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:49.748 "oacs": { 00:25:49.748 "security": 0, 00:25:49.748 "format": 1, 00:25:49.748 "firmware": 0, 00:25:49.748 "ns_manage": 1 00:25:49.748 }, 00:25:49.748 "multi_ctrlr": false, 00:25:49.748 "ana_reporting": false 00:25:49.748 }, 00:25:49.748 "vs": { 00:25:49.748 "nvme_version": "1.4" 00:25:49.748 }, 00:25:49.748 "ns_data": { 00:25:49.748 "id": 1, 00:25:49.748 "can_share": false 00:25:49.748 } 00:25:49.748 } 00:25:49.748 ], 00:25:49.748 "mp_policy": "active_passive" 00:25:49.748 } 00:25:49.748 } 00:25:49.748 ]' 00:25:49.748 14:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:49.748 14:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:49.748 14:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:49.748 14:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:49.748 14:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:49.748 14:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:49.748 14:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:49.748 14:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:49.748 14:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:49.748 14:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:49.748 14:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:50.007 14:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=a4292fb2-0ed5-4e0f-a547-22e00a2d84c7 00:25:50.007 14:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:50.007 14:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a4292fb2-0ed5-4e0f-a547-22e00a2d84c7 00:25:50.266 14:06:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:50.266 14:06:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=a0ee91dc-b81e-4958-b749-578aa0ce787d 00:25:50.266 14:06:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a0ee91dc-b81e-4958-b749-578aa0ce787d 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=2e51c60a-0dc0-4e98-9b8d-9565249cd47f 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2e51c60a-0dc0-4e98-9b8d-9565249cd47f 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=2e51c60a-0dc0-4e98-9b8d-9565249cd47f 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 2e51c60a-0dc0-4e98-9b8d-9565249cd47f 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=2e51c60a-0dc0-4e98-9b8d-9565249cd47f 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e51c60a-0dc0-4e98-9b8d-9565249cd47f 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:50.835 { 00:25:50.835 "name": "2e51c60a-0dc0-4e98-9b8d-9565249cd47f", 00:25:50.835 "aliases": [ 00:25:50.835 "lvs/nvme0n1p0" 00:25:50.835 ], 00:25:50.835 "product_name": "Logical Volume", 00:25:50.835 "block_size": 4096, 00:25:50.835 "num_blocks": 26476544, 00:25:50.835 "uuid": "2e51c60a-0dc0-4e98-9b8d-9565249cd47f", 00:25:50.835 "assigned_rate_limits": { 00:25:50.835 "rw_ios_per_sec": 0, 00:25:50.835 "rw_mbytes_per_sec": 0, 00:25:50.835 "r_mbytes_per_sec": 0, 00:25:50.835 "w_mbytes_per_sec": 0 00:25:50.835 }, 00:25:50.835 "claimed": false, 00:25:50.835 "zoned": false, 00:25:50.835 "supported_io_types": { 00:25:50.835 "read": true, 00:25:50.835 "write": true, 00:25:50.835 "unmap": true, 00:25:50.835 "flush": false, 00:25:50.835 "reset": true, 00:25:50.835 "nvme_admin": false, 00:25:50.835 "nvme_io": false, 00:25:50.835 "nvme_io_md": false, 00:25:50.835 "write_zeroes": true, 00:25:50.835 "zcopy": false, 00:25:50.835 "get_zone_info": false, 00:25:50.835 "zone_management": false, 00:25:50.835 "zone_append": false, 00:25:50.835 "compare": false, 00:25:50.835 "compare_and_write": false, 00:25:50.835 "abort": false, 00:25:50.835 "seek_hole": true, 00:25:50.835 "seek_data": true, 00:25:50.835 "copy": false, 00:25:50.835 "nvme_iov_md": false 00:25:50.835 }, 00:25:50.835 "driver_specific": { 00:25:50.835 "lvol": { 00:25:50.835 "lvol_store_uuid": "a0ee91dc-b81e-4958-b749-578aa0ce787d", 00:25:50.835 "base_bdev": "nvme0n1", 00:25:50.835 "thin_provision": true, 00:25:50.835 "num_allocated_clusters": 0, 00:25:50.835 "snapshot": false, 00:25:50.835 "clone": false, 00:25:50.835 "esnap_clone": false 00:25:50.835 } 00:25:50.835 } 00:25:50.835 } 00:25:50.835 ]' 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:50.835 14:06:43 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:25:51.103 14:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:25:51.103 14:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]]
00:25:51.378 14:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 2e51c60a-0dc0-4e98-9b8d-9565249cd47f
00:25:51.378 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=2e51c60a-0dc0-4e98-9b8d-9565249cd47f
00:25:51.378 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:25:51.378 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:25:51.378 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:25:51.378 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e51c60a-0dc0-4e98-9b8d-9565249cd47f
00:25:51.378 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ ... ]' (bdev_get_bdevs output for this lvol, identical to the dump shown earlier)
00:25:51.378 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:25:51.378 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:25:51.378 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:25:51.637 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544
00:25:51.637 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:25:51.637 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424
00:25:51.637 14:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171
00:25:51.637 14:06:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
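get_bdev_size, traced repeatedly above, is just a jq pipeline over one RPC call: fetch the bdev's JSON, pull out block_size and num_blocks, and convert to MiB. A minimal sketch under the same names the xtrace shows (herestring form for brevity; the harness uses locals):

    bdev=2e51c60a-0dc0-4e98-9b8d-9565249cd47f
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev")
    bs=$(jq '.[] .block_size' <<< "$info")       # 4096
    nb=$(jq '.[] .num_blocks' <<< "$info")       # 26476544
    echo $(( bs * nb / 1024 / 1024 ))            # 103424 (MiB): 26476544 blocks of 4 KiB

cache_size then comes out as 5171, i.e. 103424 / 20 in integer arithmetic, which looks like a 5% write-buffer sizing rule; the bdev_split_create call above carves exactly that many MiB out of nvc0n1.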
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:25:51.637 14:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0
00:25:51.637 14:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 2e51c60a-0dc0-4e98-9b8d-9565249cd47f
00:25:51.637 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=2e51c60a-0dc0-4e98-9b8d-9565249cd47f
00:25:51.637 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:25:51.637 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:25:51.637 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:25:51.637 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e51c60a-0dc0-4e98-9b8d-9565249cd47f
00:25:51.896 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ ... ]' (bdev_get_bdevs output for this lvol, identical to the dump shown earlier)
00:25:51.896 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:25:51.896 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:25:51.896 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:25:52.155 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544
00:25:52.155 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:25:52.155 14:06:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424
00:25:52.155 14:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10
00:25:52.155 14:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 2e51c60a-0dc0-4e98-9b8d-9565249cd47f --l2p_dram_limit 10'
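At this point ftl_construct_args holds the create call minus the cache clause, which the next xtrace lines append before firing the RPC. Fully expanded, with this run's UUID and bdev names, the invocation is:

    # -b: name of the FTL bdev to create   -d: base (data) bdev, the thin lvol
    # -c: NV-cache bdev                    --l2p_dram_limit: MiB of DRAM for the resident L2P
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
        -b ftl0 -d 2e51c60a-0dc0-4e98-9b8d-9565249cd47f --l2p_dram_limit 10 -c nvc0n1p0

The 10 MiB limit is deliberately tight: the startup dump below reports 20971520 L2P entries with an address size of 4, i.e. an 80 MiB full table (matching the 80.00 MiB l2p region in the layout), so only a fraction of the mapping can be DRAM-resident at a time.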
00:25:52.155 14:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']'
00:25:52.155 14:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']'
00:25:52.155 14:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0'
00:25:52.155 14:06:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2e51c60a-0dc0-4e98-9b8d-9565249cd47f --l2p_dram_limit 10 -c nvc0n1p0
00:25:52.155 [2024-12-11 14:06:45.169128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:52.155 [2024-12-11 14:06:45.169414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:25:52.155 [2024-12-11 14:06:45.169446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:25:52.155 [2024-12-11 14:06:45.169458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:52.155 [2024-12-11 14:06:45.169541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:52.155 [2024-12-11 14:06:45.169554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:52.155 [2024-12-11 14:06:45.169567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms
00:25:52.155 [2024-12-11 14:06:45.169578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:52.155 [2024-12-11 14:06:45.169620] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:25:52.155 [2024-12-11 14:06:45.170620] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:25:52.155 [2024-12-11 14:06:45.170660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:52.155 [2024-12-11 14:06:45.170672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:52.155 [2024-12-11 14:06:45.170685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.050 ms
00:25:52.155 [2024-12-11 14:06:45.170696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:52.155 [2024-12-11 14:06:45.170773] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d06a086c-22c8-4d8c-a657-f1bbdab1dbcb
00:25:52.155 [2024-12-11 14:06:45.172199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:52.155 [2024-12-11 14:06:45.172234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock
00:25:52.155 [2024-12-11 14:06:45.172247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms
00:25:52.155 [2024-12-11 14:06:45.172260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:52.155 [2024-12-11 14:06:45.179716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:52.155 [2024-12-11 14:06:45.179756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:52.155 [2024-12-11 14:06:45.179768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.413 ms
00:25:52.155 [2024-12-11 14:06:45.179781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:52.155 [2024-12-11 14:06:45.179894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:52.155 [2024-12-11 14:06:45.179912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:52.155 [2024-12-11 14:06:45.179924]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:25:52.155 [2024-12-11 14:06:45.179941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.155 [2024-12-11 14:06:45.180022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.155 [2024-12-11 14:06:45.180039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:52.156 [2024-12-11 14:06:45.180050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:52.156 [2024-12-11 14:06:45.180067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.156 [2024-12-11 14:06:45.180093] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:52.156 [2024-12-11 14:06:45.185217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.156 [2024-12-11 14:06:45.185250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:52.156 [2024-12-11 14:06:45.185266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.137 ms 00:25:52.156 [2024-12-11 14:06:45.185277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.156 [2024-12-11 14:06:45.185317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.156 [2024-12-11 14:06:45.185328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:52.156 [2024-12-11 14:06:45.185341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:52.156 [2024-12-11 14:06:45.185351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.156 [2024-12-11 14:06:45.185398] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:52.156 [2024-12-11 14:06:45.185529] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:52.156 [2024-12-11 14:06:45.185550] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:52.156 [2024-12-11 14:06:45.185564] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:52.156 [2024-12-11 14:06:45.185581] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:52.156 [2024-12-11 14:06:45.185593] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:52.156 [2024-12-11 14:06:45.185607] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:52.156 [2024-12-11 14:06:45.185617] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:52.156 [2024-12-11 14:06:45.185634] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:52.156 [2024-12-11 14:06:45.185644] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:52.156 [2024-12-11 14:06:45.185657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.156 [2024-12-11 14:06:45.185677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:52.156 [2024-12-11 14:06:45.185691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:25:52.156 [2024-12-11 14:06:45.185701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.156 [2024-12-11 14:06:45.185779] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.156 [2024-12-11 14:06:45.185790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:52.156 [2024-12-11 14:06:45.185803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:25:52.156 [2024-12-11 14:06:45.185813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.156 [2024-12-11 14:06:45.185931] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:52.156 [2024-12-11 14:06:45.185944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:52.156 [2024-12-11 14:06:45.185957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:52.156 [2024-12-11 14:06:45.185967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.156 [2024-12-11 14:06:45.185980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:52.156 [2024-12-11 14:06:45.185990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:52.156 [2024-12-11 14:06:45.186002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:52.156 [2024-12-11 14:06:45.186011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:52.156 [2024-12-11 14:06:45.186023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:52.156 [2024-12-11 14:06:45.186032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:52.156 [2024-12-11 14:06:45.186044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:52.156 [2024-12-11 14:06:45.186054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:52.156 [2024-12-11 14:06:45.186067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:52.156 [2024-12-11 14:06:45.186076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:52.156 [2024-12-11 14:06:45.186088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:52.156 [2024-12-11 14:06:45.186105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.156 [2024-12-11 14:06:45.186120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:52.156 [2024-12-11 14:06:45.186129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:52.156 [2024-12-11 14:06:45.186141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.156 [2024-12-11 14:06:45.186150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:52.156 [2024-12-11 14:06:45.186162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:52.156 [2024-12-11 14:06:45.186173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.156 [2024-12-11 14:06:45.186185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:52.156 [2024-12-11 14:06:45.186194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:52.156 [2024-12-11 14:06:45.186205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.156 [2024-12-11 14:06:45.186214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:52.156 [2024-12-11 14:06:45.186226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:52.156 [2024-12-11 14:06:45.186235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.156 [2024-12-11 14:06:45.186247] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:52.156 [2024-12-11 14:06:45.186256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:52.156 [2024-12-11 14:06:45.186267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.156 [2024-12-11 14:06:45.186276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:52.156 [2024-12-11 14:06:45.186291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:52.156 [2024-12-11 14:06:45.186300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:52.156 [2024-12-11 14:06:45.186312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:52.156 [2024-12-11 14:06:45.186321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:52.156 [2024-12-11 14:06:45.186332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:52.156 [2024-12-11 14:06:45.186341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:52.156 [2024-12-11 14:06:45.186354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:52.156 [2024-12-11 14:06:45.186363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.156 [2024-12-11 14:06:45.186375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:52.156 [2024-12-11 14:06:45.186384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:52.156 [2024-12-11 14:06:45.186396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.156 [2024-12-11 14:06:45.186405] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:52.156 [2024-12-11 14:06:45.186417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:52.156 [2024-12-11 14:06:45.186427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:52.156 [2024-12-11 14:06:45.186440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.156 [2024-12-11 14:06:45.186450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:52.156 [2024-12-11 14:06:45.186464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:52.156 [2024-12-11 14:06:45.186474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:52.156 [2024-12-11 14:06:45.186486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:52.156 [2024-12-11 14:06:45.186495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:52.156 [2024-12-11 14:06:45.186507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:52.156 [2024-12-11 14:06:45.186520] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:52.156 [2024-12-11 14:06:45.186535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:52.156 [2024-12-11 14:06:45.186550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:52.156 [2024-12-11 14:06:45.186563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:52.156 [2024-12-11 14:06:45.186573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:52.156 [2024-12-11 14:06:45.186586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:52.156 [2024-12-11 14:06:45.186596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:52.156 [2024-12-11 14:06:45.186609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:52.156 [2024-12-11 14:06:45.186619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:52.156 [2024-12-11 14:06:45.186632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:52.156 [2024-12-11 14:06:45.186643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:52.156 [2024-12-11 14:06:45.186659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:52.156 [2024-12-11 14:06:45.186669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:52.156 [2024-12-11 14:06:45.186682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:52.156 [2024-12-11 14:06:45.186693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:52.156 [2024-12-11 14:06:45.186706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:52.156 [2024-12-11 14:06:45.186716] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:52.156 [2024-12-11 14:06:45.186730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:52.156 [2024-12-11 14:06:45.186741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:52.157 [2024-12-11 14:06:45.186754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:52.157 [2024-12-11 14:06:45.186764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:52.157 [2024-12-11 14:06:45.186776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:52.157 [2024-12-11 14:06:45.186787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.157 [2024-12-11 14:06:45.186800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:52.157 [2024-12-11 14:06:45.186810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:25:52.157 [2024-12-11 14:06:45.186832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.157 [2024-12-11 14:06:45.186877] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:52.157 [2024-12-11 14:06:45.186894] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:56.344 [2024-12-11 14:06:48.779404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.344 [2024-12-11 14:06:48.779487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:56.344 [2024-12-11 14:06:48.779506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3598.357 ms 00:25:56.344 [2024-12-11 14:06:48.779520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.344 [2024-12-11 14:06:48.816039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.344 [2024-12-11 14:06:48.816102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:56.344 [2024-12-11 14:06:48.816119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.219 ms 00:25:56.344 [2024-12-11 14:06:48.816133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.344 [2024-12-11 14:06:48.816289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.344 [2024-12-11 14:06:48.816306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:56.344 [2024-12-11 14:06:48.816317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:25:56.344 [2024-12-11 14:06:48.816336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.344 [2024-12-11 14:06:48.858548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.344 [2024-12-11 14:06:48.858608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:56.344 [2024-12-11 14:06:48.858624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.213 ms 00:25:56.344 [2024-12-11 14:06:48.858638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.344 [2024-12-11 14:06:48.858688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.344 [2024-12-11 14:06:48.858708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:56.344 [2024-12-11 14:06:48.858719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:56.344 [2024-12-11 14:06:48.858743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.344 [2024-12-11 14:06:48.859248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.344 [2024-12-11 14:06:48.859268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:56.344 [2024-12-11 14:06:48.859279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:25:56.344 [2024-12-11 14:06:48.859292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.344 [2024-12-11 14:06:48.859399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.344 [2024-12-11 14:06:48.859412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:56.344 [2024-12-11 14:06:48.859425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:25:56.344 [2024-12-11 14:06:48.859441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.344 [2024-12-11 14:06:48.878617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.344 [2024-12-11 14:06:48.878680] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:56.344 [2024-12-11 14:06:48.878696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.185 ms 00:25:56.344 [2024-12-11 14:06:48.878709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.344 [2024-12-11 14:06:48.902185] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:56.344 [2024-12-11 14:06:48.905585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.344 [2024-12-11 14:06:48.905623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:56.344 [2024-12-11 14:06:48.905642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.800 ms 00:25:56.344 [2024-12-11 14:06:48.905655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.344 [2024-12-11 14:06:48.995190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.344 [2024-12-11 14:06:48.995470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:56.344 [2024-12-11 14:06:48.995502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.625 ms 00:25:56.344 [2024-12-11 14:06:48.995515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.344 [2024-12-11 14:06:48.995764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.344 [2024-12-11 14:06:48.995781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:56.344 [2024-12-11 14:06:48.995799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:25:56.344 [2024-12-11 14:06:48.995809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.344 [2024-12-11 14:06:49.033736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.344 [2024-12-11 14:06:49.033793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:56.344 [2024-12-11 14:06:49.033812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.900 ms 00:25:56.344 [2024-12-11 14:06:49.033835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.344 [2024-12-11 14:06:49.070953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.344 [2024-12-11 14:06:49.071004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:56.344 [2024-12-11 14:06:49.071023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.112 ms 00:25:56.344 [2024-12-11 14:06:49.071034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.344 [2024-12-11 14:06:49.071747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.344 [2024-12-11 14:06:49.071767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:56.344 [2024-12-11 14:06:49.071781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:25:56.344 [2024-12-11 14:06:49.071794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.345 [2024-12-11 14:06:49.176355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.345 [2024-12-11 14:06:49.176420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:56.345 [2024-12-11 14:06:49.176444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.640 ms 00:25:56.345 [2024-12-11 14:06:49.176455] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.345 [2024-12-11 14:06:49.216465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.345 [2024-12-11 14:06:49.216734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:56.345 [2024-12-11 14:06:49.216768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.963 ms 00:25:56.345 [2024-12-11 14:06:49.216779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.345 [2024-12-11 14:06:49.255876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.345 [2024-12-11 14:06:49.255936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:56.345 [2024-12-11 14:06:49.255954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.084 ms 00:25:56.345 [2024-12-11 14:06:49.255965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.345 [2024-12-11 14:06:49.294107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.345 [2024-12-11 14:06:49.294167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:56.345 [2024-12-11 14:06:49.294187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.131 ms 00:25:56.345 [2024-12-11 14:06:49.294197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.345 [2024-12-11 14:06:49.294265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.345 [2024-12-11 14:06:49.294278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:56.345 [2024-12-11 14:06:49.294296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:56.345 [2024-12-11 14:06:49.294306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.345 [2024-12-11 14:06:49.294430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.345 [2024-12-11 14:06:49.294448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:56.345 [2024-12-11 14:06:49.294462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:56.345 [2024-12-11 14:06:49.294472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.345 [2024-12-11 14:06:49.295535] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4132.667 ms, result 0 00:25:56.345 { 00:25:56.345 "name": "ftl0", 00:25:56.345 "uuid": "d06a086c-22c8-4d8c-a657-f1bbdab1dbcb" 00:25:56.345 } 00:25:56.345 14:06:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:56.345 14:06:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:56.603 14:06:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:56.603 14:06:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:56.604 14:06:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:56.862 /dev/nbd0 00:25:56.862 14:06:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:56.862 14:06:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:56.862 14:06:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:56.862 14:06:49 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:56.862 14:06:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:56.862 14:06:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:56.862 14:06:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:56.862 14:06:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:56.862 14:06:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:56.862 14:06:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:56.862 1+0 records in 00:25:56.862 1+0 records out 00:25:56.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407046 s, 10.1 MB/s 00:25:56.862 14:06:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:56.862 14:06:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:56.862 14:06:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:56.862 14:06:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:56.862 14:06:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:56.862 14:06:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:56.862 [2024-12-11 14:06:49.881307] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:25:56.863 [2024-12-11 14:06:49.881997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82358 ] 00:25:57.121 [2024-12-11 14:06:50.063374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.379 [2024-12-11 14:06:50.184023] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.761  [2024-12-11T14:06:52.744Z] Copying: 200/1024 [MB] (200 MBps) [2024-12-11T14:06:53.742Z] Copying: 400/1024 [MB] (200 MBps) [2024-12-11T14:06:54.679Z] Copying: 601/1024 [MB] (200 MBps) [2024-12-11T14:06:55.615Z] Copying: 801/1024 [MB] (200 MBps) [2024-12-11T14:06:55.873Z] Copying: 992/1024 [MB] (190 MBps) [2024-12-11T14:06:57.251Z] Copying: 1024/1024 [MB] (average 198 MBps) 00:26:04.204 00:26:04.204 14:06:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:05.582 14:06:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:05.841 [2024-12-11 14:06:58.678146] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
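For reference, the NBD round-trip exercised above can be reproduced by hand. A minimal sketch, assuming a running spdk_tgt that already exposes an ftl0 bdev and an $SPDK variable pointing at the SPDK checkout (both assumptions; the harness uses its hard-coded /home/vagrant/spdk_repo/spdk paths and the waitfornbd helper traced above):

  # Expose the FTL bdev as a kernel block device, as the harness does above.
  modprobe nbd
  "$SPDK/scripts/rpc.py" nbd_start_disk ftl0 /dev/nbd0

  # Wait until the kernel lists nbd0, mirroring the waitfornbd loop in the
  # xtrace; the 0.1 s back-off is an assumption, the trace only shows the
  # bounded grep/break loop over /proc/partitions.
  for i in $(seq 1 20); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1
  done

  # Fill a 1 GiB test file with random data (262144 x 4 KiB blocks), checksum
  # it, then push it through /dev/nbd0 with O_DIRECT -- the same spdk_dd flags
  # that appear in the log above.
  "$SPDK/build/bin/spdk_dd" -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
  md5sum testfile
  "$SPDK/build/bin/spdk_dd" -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

The ~17 MBps "Copying" progress that follows is this last transfer: every 4 KiB write travels through the kernel NBD client into the FTL bdev.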
00:26:05.841 [2024-12-11 14:06:58.678281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82452 ] 00:26:05.841 [2024-12-11 14:06:58.846941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.100 [2024-12-11 14:06:58.970139] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.479  [2024-12-11T14:07:01.464Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-11T14:07:02.401Z] Copying: 30/1024 [MB] (13 MBps) [2024-12-11T14:07:03.386Z] Copying: 47/1024 [MB] (16 MBps) [2024-12-11T14:07:04.325Z] Copying: 64/1024 [MB] (17 MBps) [2024-12-11T14:07:05.714Z] Copying: 82/1024 [MB] (17 MBps) [2024-12-11T14:07:06.281Z] Copying: 99/1024 [MB] (17 MBps) [2024-12-11T14:07:07.658Z] Copying: 116/1024 [MB] (17 MBps) [2024-12-11T14:07:08.595Z] Copying: 133/1024 [MB] (17 MBps) [2024-12-11T14:07:09.533Z] Copying: 150/1024 [MB] (17 MBps) [2024-12-11T14:07:10.470Z] Copying: 167/1024 [MB] (16 MBps) [2024-12-11T14:07:11.407Z] Copying: 184/1024 [MB] (17 MBps) [2024-12-11T14:07:12.342Z] Copying: 201/1024 [MB] (17 MBps) [2024-12-11T14:07:13.312Z] Copying: 219/1024 [MB] (17 MBps) [2024-12-11T14:07:14.705Z] Copying: 236/1024 [MB] (17 MBps) [2024-12-11T14:07:15.280Z] Copying: 253/1024 [MB] (17 MBps) [2024-12-11T14:07:16.656Z] Copying: 271/1024 [MB] (17 MBps) [2024-12-11T14:07:17.593Z] Copying: 288/1024 [MB] (17 MBps) [2024-12-11T14:07:18.529Z] Copying: 306/1024 [MB] (17 MBps) [2024-12-11T14:07:19.467Z] Copying: 324/1024 [MB] (17 MBps) [2024-12-11T14:07:20.403Z] Copying: 341/1024 [MB] (17 MBps) [2024-12-11T14:07:21.340Z] Copying: 359/1024 [MB] (17 MBps) [2024-12-11T14:07:22.277Z] Copying: 377/1024 [MB] (18 MBps) [2024-12-11T14:07:23.655Z] Copying: 394/1024 [MB] (17 MBps) [2024-12-11T14:07:24.614Z] Copying: 412/1024 [MB] (17 MBps) [2024-12-11T14:07:25.557Z] Copying: 429/1024 [MB] (17 MBps) [2024-12-11T14:07:26.495Z] Copying: 447/1024 [MB] (17 MBps) [2024-12-11T14:07:27.433Z] Copying: 465/1024 [MB] (17 MBps) [2024-12-11T14:07:28.370Z] Copying: 483/1024 [MB] (17 MBps) [2024-12-11T14:07:29.308Z] Copying: 500/1024 [MB] (17 MBps) [2024-12-11T14:07:30.245Z] Copying: 518/1024 [MB] (18 MBps) [2024-12-11T14:07:31.622Z] Copying: 535/1024 [MB] (16 MBps) [2024-12-11T14:07:32.558Z] Copying: 553/1024 [MB] (17 MBps) [2024-12-11T14:07:33.496Z] Copying: 570/1024 [MB] (16 MBps) [2024-12-11T14:07:34.464Z] Copying: 587/1024 [MB] (17 MBps) [2024-12-11T14:07:35.402Z] Copying: 604/1024 [MB] (17 MBps) [2024-12-11T14:07:36.338Z] Copying: 621/1024 [MB] (16 MBps) [2024-12-11T14:07:37.274Z] Copying: 638/1024 [MB] (17 MBps) [2024-12-11T14:07:38.652Z] Copying: 655/1024 [MB] (17 MBps) [2024-12-11T14:07:39.588Z] Copying: 672/1024 [MB] (16 MBps) [2024-12-11T14:07:40.525Z] Copying: 689/1024 [MB] (16 MBps) [2024-12-11T14:07:41.461Z] Copying: 706/1024 [MB] (17 MBps) [2024-12-11T14:07:42.399Z] Copying: 723/1024 [MB] (17 MBps) [2024-12-11T14:07:43.334Z] Copying: 740/1024 [MB] (16 MBps) [2024-12-11T14:07:44.304Z] Copying: 757/1024 [MB] (17 MBps) [2024-12-11T14:07:45.250Z] Copying: 775/1024 [MB] (17 MBps) [2024-12-11T14:07:46.628Z] Copying: 792/1024 [MB] (17 MBps) [2024-12-11T14:07:47.566Z] Copying: 810/1024 [MB] (17 MBps) [2024-12-11T14:07:48.503Z] Copying: 827/1024 [MB] (17 MBps) [2024-12-11T14:07:49.441Z] Copying: 844/1024 [MB] (16 MBps) [2024-12-11T14:07:50.378Z] Copying: 861/1024 [MB] (17 MBps) 
[2024-12-11T14:07:51.315Z] Copying: 879/1024 [MB] (17 MBps) [2024-12-11T14:07:52.252Z] Copying: 896/1024 [MB] (17 MBps) [2024-12-11T14:07:53.629Z] Copying: 913/1024 [MB] (17 MBps) [2024-12-11T14:07:54.595Z] Copying: 930/1024 [MB] (17 MBps) [2024-12-11T14:07:55.536Z] Copying: 947/1024 [MB] (16 MBps) [2024-12-11T14:07:56.472Z] Copying: 964/1024 [MB] (16 MBps) [2024-12-11T14:07:57.422Z] Copying: 982/1024 [MB] (17 MBps) [2024-12-11T14:07:58.361Z] Copying: 999/1024 [MB] (17 MBps) [2024-12-11T14:07:58.632Z] Copying: 1016/1024 [MB] (17 MBps) [2024-12-11T14:08:00.009Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:27:06.962 00:27:06.962 14:07:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:06.962 14:07:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:07.221 14:08:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:07.221 [2024-12-11 14:08:00.223786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.221 [2024-12-11 14:08:00.224145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:07.221 [2024-12-11 14:08:00.224175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:07.221 [2024-12-11 14:08:00.224190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.221 [2024-12-11 14:08:00.224238] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:07.221 [2024-12-11 14:08:00.228469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.221 [2024-12-11 14:08:00.228507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:07.221 [2024-12-11 14:08:00.228524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.212 ms 00:27:07.221 [2024-12-11 14:08:00.228534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.221 [2024-12-11 14:08:00.230325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.221 [2024-12-11 14:08:00.230367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:07.221 [2024-12-11 14:08:00.230383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.754 ms 00:27:07.221 [2024-12-11 14:08:00.230394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.221 [2024-12-11 14:08:00.248611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.221 [2024-12-11 14:08:00.248669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:07.221 [2024-12-11 14:08:00.248688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.222 ms 00:27:07.221 [2024-12-11 14:08:00.248699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.221 [2024-12-11 14:08:00.253769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.221 [2024-12-11 14:08:00.253957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:07.221 [2024-12-11 14:08:00.253988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.031 ms 00:27:07.221 [2024-12-11 14:08:00.253999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.482 [2024-12-11 14:08:00.292052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.482 [2024-12-11 14:08:00.292108] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:07.482 [2024-12-11 14:08:00.292127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.005 ms 00:27:07.482 [2024-12-11 14:08:00.292138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.482 [2024-12-11 14:08:00.315040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.482 [2024-12-11 14:08:00.315098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:07.482 [2024-12-11 14:08:00.315129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.873 ms 00:27:07.482 [2024-12-11 14:08:00.315140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.482 [2024-12-11 14:08:00.315323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.482 [2024-12-11 14:08:00.315338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:07.482 [2024-12-11 14:08:00.315352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:27:07.482 [2024-12-11 14:08:00.315363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.482 [2024-12-11 14:08:00.353676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.482 [2024-12-11 14:08:00.353936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:07.482 [2024-12-11 14:08:00.353966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.349 ms 00:27:07.482 [2024-12-11 14:08:00.353977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.482 [2024-12-11 14:08:00.391997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.482 [2024-12-11 14:08:00.392068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:07.482 [2024-12-11 14:08:00.392104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.962 ms 00:27:07.482 [2024-12-11 14:08:00.392116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.482 [2024-12-11 14:08:00.429897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.482 [2024-12-11 14:08:00.429954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:07.482 [2024-12-11 14:08:00.429972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.754 ms 00:27:07.482 [2024-12-11 14:08:00.429983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.482 [2024-12-11 14:08:00.468211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.482 [2024-12-11 14:08:00.468497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:07.482 [2024-12-11 14:08:00.468531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.130 ms 00:27:07.482 [2024-12-11 14:08:00.468543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.482 [2024-12-11 14:08:00.468608] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:07.482 [2024-12-11 14:08:00.468627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 
wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.468999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
28: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:07.482 [2024-12-11 14:08:00.469230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469328] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469648] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:07.483 [2024-12-11 14:08:00.469961] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:07.483 [2024-12-11 14:08:00.469975] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d06a086c-22c8-4d8c-a657-f1bbdab1dbcb 00:27:07.483 [2024-12-11 14:08:00.469986] ftl_debug.c: 
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:07.483 [2024-12-11 14:08:00.470001] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:07.483 [2024-12-11 14:08:00.470011] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:07.483 [2024-12-11 14:08:00.470028] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:07.483 [2024-12-11 14:08:00.470038] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:07.483 [2024-12-11 14:08:00.470050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:07.483 [2024-12-11 14:08:00.470061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:07.483 [2024-12-11 14:08:00.470073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:07.483 [2024-12-11 14:08:00.470082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:07.483 [2024-12-11 14:08:00.470095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.483 [2024-12-11 14:08:00.470117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:07.483 [2024-12-11 14:08:00.470132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.491 ms 00:27:07.483 [2024-12-11 14:08:00.470142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.483 [2024-12-11 14:08:00.490666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.483 [2024-12-11 14:08:00.490718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:07.483 [2024-12-11 14:08:00.490735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.455 ms 00:27:07.483 [2024-12-11 14:08:00.490746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.483 [2024-12-11 14:08:00.491341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.483 [2024-12-11 14:08:00.491359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:07.483 [2024-12-11 14:08:00.491373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:27:07.483 [2024-12-11 14:08:00.491383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.743 [2024-12-11 14:08:00.556754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.743 [2024-12-11 14:08:00.556818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:07.743 [2024-12-11 14:08:00.556860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.743 [2024-12-11 14:08:00.556871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.743 [2024-12-11 14:08:00.556961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.743 [2024-12-11 14:08:00.556973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:07.743 [2024-12-11 14:08:00.556986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.743 [2024-12-11 14:08:00.556996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.743 [2024-12-11 14:08:00.557103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.743 [2024-12-11 14:08:00.557121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:07.743 [2024-12-11 14:08:00.557134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:27:07.743 [2024-12-11 14:08:00.557144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.743 [2024-12-11 14:08:00.557170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.743 [2024-12-11 14:08:00.557181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:07.743 [2024-12-11 14:08:00.557193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.743 [2024-12-11 14:08:00.557203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.743 [2024-12-11 14:08:00.683312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.743 [2024-12-11 14:08:00.683365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:07.743 [2024-12-11 14:08:00.683384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.743 [2024-12-11 14:08:00.683395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.743 [2024-12-11 14:08:00.787142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.743 [2024-12-11 14:08:00.787194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:07.743 [2024-12-11 14:08:00.787212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.743 [2024-12-11 14:08:00.787223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.743 [2024-12-11 14:08:00.787340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.743 [2024-12-11 14:08:00.787353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:07.743 [2024-12-11 14:08:00.787370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.743 [2024-12-11 14:08:00.787381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.743 [2024-12-11 14:08:00.787445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.743 [2024-12-11 14:08:00.787457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:07.743 [2024-12-11 14:08:00.787470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.743 [2024-12-11 14:08:00.787481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.743 [2024-12-11 14:08:00.787593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.743 [2024-12-11 14:08:00.787606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:07.743 [2024-12-11 14:08:00.787619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.743 [2024-12-11 14:08:00.787632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.743 [2024-12-11 14:08:00.787672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.743 [2024-12-11 14:08:00.787684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:07.743 [2024-12-11 14:08:00.787697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.743 [2024-12-11 14:08:00.787707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.743 [2024-12-11 14:08:00.787748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.743 [2024-12-11 14:08:00.787759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:07.743 [2024-12-11 
14:08:00.787772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.743 [2024-12-11 14:08:00.787785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.743 [2024-12-11 14:08:00.787882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.743 [2024-12-11 14:08:00.787899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:07.743 [2024-12-11 14:08:00.787912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.743 [2024-12-11 14:08:00.787922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.743 [2024-12-11 14:08:00.788105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 565.153 ms, result 0 00:27:08.003 true 00:27:08.003 14:08:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82209 00:27:08.003 14:08:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82209 00:27:08.003 14:08:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:08.003 [2024-12-11 14:08:00.904192] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:27:08.003 [2024-12-11 14:08:00.904524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83084 ] 00:27:08.261 [2024-12-11 14:08:01.085270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.261 [2024-12-11 14:08:01.196782] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.638  [2024-12-11T14:08:03.626Z] Copying: 196/1024 [MB] (196 MBps) [2024-12-11T14:08:04.582Z] Copying: 395/1024 [MB] (198 MBps) [2024-12-11T14:08:05.520Z] Copying: 595/1024 [MB] (200 MBps) [2024-12-11T14:08:06.896Z] Copying: 793/1024 [MB] (197 MBps) [2024-12-11T14:08:06.896Z] Copying: 992/1024 [MB] (198 MBps) [2024-12-11T14:08:07.833Z] Copying: 1024/1024 [MB] (average 198 MBps) 00:27:14.786 00:27:14.786 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82209 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:14.786 14:08:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:15.045 [2024-12-11 14:08:07.870362] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
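The spdk_dd invocation above no longer goes through NBD: after the target is killed with kill -9, each subsequent spdk_dd hosts the bdev stack itself, loading the configuration saved earlier with save_subsystem_config and addressing the FTL bdev directly via --ob. A minimal sketch of that pattern, assuming the saved config is in a local ftl.json (the harness writes it under test/ftl/config/, and my reading of --seek as counting I/O units rather than bytes is an assumption):

  # While the target is still up, capture the bdev subsystem configuration;
  # the harness wraps this output in '{"subsystems": [' ... ']}' to build ftl.json.
  "$SPDK/scripts/rpc.py" save_subsystem_config -n bdev

  # After the dirty shutdown, write testfile2 into ftl0 at an offset of
  # 262144 I/O units (--seek), with spdk_dd bringing the bdevs up in-process
  # from the JSON config instead of talking to a target.
  "$SPDK/build/bin/spdk_dd" --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json

The notices that follow are the expected signs of this restart path: the in-process app waits for the cache bdev to appear ("unable to find bdev with name: nvc0n1"), replays the blobstore, and loads the superblock with "SHM: clean 0", so FTL comes back up from a dirty state rather than trusting shared-memory state.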
00:27:15.046 [2024-12-11 14:08:07.870492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83149 ] 00:27:15.046 [2024-12-11 14:08:08.051303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.304 [2024-12-11 14:08:08.166277] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.564 [2024-12-11 14:08:08.530120] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:15.564 [2024-12-11 14:08:08.530188] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:15.564 [2024-12-11 14:08:08.596435] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:15.564 [2024-12-11 14:08:08.596781] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:15.564 [2024-12-11 14:08:08.596975] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:16.134 [2024-12-11 14:08:08.901061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.134 [2024-12-11 14:08:08.901123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:16.134 [2024-12-11 14:08:08.901139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:16.134 [2024-12-11 14:08:08.901153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.134 [2024-12-11 14:08:08.901206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.134 [2024-12-11 14:08:08.901219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:16.134 [2024-12-11 14:08:08.901230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:16.134 [2024-12-11 14:08:08.901240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.134 [2024-12-11 14:08:08.901262] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:16.134 [2024-12-11 14:08:08.902325] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:16.134 [2024-12-11 14:08:08.902347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.134 [2024-12-11 14:08:08.902358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:16.134 [2024-12-11 14:08:08.902369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:27:16.134 [2024-12-11 14:08:08.902379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.134 [2024-12-11 14:08:08.903864] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:16.134 [2024-12-11 14:08:08.924106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.134 [2024-12-11 14:08:08.924155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:16.134 [2024-12-11 14:08:08.924171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.276 ms 00:27:16.134 [2024-12-11 14:08:08.924182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.134 [2024-12-11 14:08:08.924256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.134 [2024-12-11 14:08:08.924269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:27:16.134 [2024-12-11 14:08:08.924280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:16.134 [2024-12-11 14:08:08.924291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.134 [2024-12-11 14:08:08.931263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.134 [2024-12-11 14:08:08.931504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:16.134 [2024-12-11 14:08:08.931529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.902 ms 00:27:16.134 [2024-12-11 14:08:08.931541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.134 [2024-12-11 14:08:08.931638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.134 [2024-12-11 14:08:08.931651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:16.134 [2024-12-11 14:08:08.931663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:16.134 [2024-12-11 14:08:08.931673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.134 [2024-12-11 14:08:08.931728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.134 [2024-12-11 14:08:08.931740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:16.134 [2024-12-11 14:08:08.931752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:16.134 [2024-12-11 14:08:08.931762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.134 [2024-12-11 14:08:08.931790] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:16.134 [2024-12-11 14:08:08.936655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.134 [2024-12-11 14:08:08.936689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:16.134 [2024-12-11 14:08:08.936702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.880 ms 00:27:16.134 [2024-12-11 14:08:08.936712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.134 [2024-12-11 14:08:08.936746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.134 [2024-12-11 14:08:08.936758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:16.134 [2024-12-11 14:08:08.936769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:16.134 [2024-12-11 14:08:08.936779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.134 [2024-12-11 14:08:08.936851] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:16.134 [2024-12-11 14:08:08.936878] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:16.134 [2024-12-11 14:08:08.936914] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:16.134 [2024-12-11 14:08:08.936931] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:16.134 [2024-12-11 14:08:08.937042] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:16.134 [2024-12-11 14:08:08.937061] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:16.134 
[2024-12-11 14:08:08.937075] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:16.134 [2024-12-11 14:08:08.937094] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:16.134 [2024-12-11 14:08:08.937112] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:16.134 [2024-12-11 14:08:08.937123] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:16.134 [2024-12-11 14:08:08.937133] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:16.134 [2024-12-11 14:08:08.937144] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:16.134 [2024-12-11 14:08:08.937153] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:16.134 [2024-12-11 14:08:08.937164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.134 [2024-12-11 14:08:08.937174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:16.134 [2024-12-11 14:08:08.937185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:27:16.134 [2024-12-11 14:08:08.937195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.134 [2024-12-11 14:08:08.937273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.134 [2024-12-11 14:08:08.937287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:16.134 [2024-12-11 14:08:08.937297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:16.134 [2024-12-11 14:08:08.937307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.134 [2024-12-11 14:08:08.937394] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:16.134 [2024-12-11 14:08:08.937407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:16.134 [2024-12-11 14:08:08.937418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:16.134 [2024-12-11 14:08:08.937428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.134 [2024-12-11 14:08:08.937438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:16.134 [2024-12-11 14:08:08.937448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:16.134 [2024-12-11 14:08:08.937457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:16.134 [2024-12-11 14:08:08.937466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:16.134 [2024-12-11 14:08:08.937478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:16.134 [2024-12-11 14:08:08.937498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:16.134 [2024-12-11 14:08:08.937507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:16.134 [2024-12-11 14:08:08.937517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:16.134 [2024-12-11 14:08:08.937526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:16.134 [2024-12-11 14:08:08.937535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:16.134 [2024-12-11 14:08:08.937544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:16.134 [2024-12-11 14:08:08.937553] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.134 [2024-12-11 14:08:08.937562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:16.135 [2024-12-11 14:08:08.937572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:16.135 [2024-12-11 14:08:08.937580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.135 [2024-12-11 14:08:08.937590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:16.135 [2024-12-11 14:08:08.937598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:16.135 [2024-12-11 14:08:08.937607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:16.135 [2024-12-11 14:08:08.937616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:16.135 [2024-12-11 14:08:08.937625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:16.135 [2024-12-11 14:08:08.937634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:16.135 [2024-12-11 14:08:08.937642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:16.135 [2024-12-11 14:08:08.937651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:16.135 [2024-12-11 14:08:08.937660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:16.135 [2024-12-11 14:08:08.937669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:16.135 [2024-12-11 14:08:08.937677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:16.135 [2024-12-11 14:08:08.937686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:16.135 [2024-12-11 14:08:08.937695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:16.135 [2024-12-11 14:08:08.937704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:16.135 [2024-12-11 14:08:08.937713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:16.135 [2024-12-11 14:08:08.937722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:16.135 [2024-12-11 14:08:08.937731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:16.135 [2024-12-11 14:08:08.937739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:16.135 [2024-12-11 14:08:08.937748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:16.135 [2024-12-11 14:08:08.937757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:16.135 [2024-12-11 14:08:08.937766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.135 [2024-12-11 14:08:08.937775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:16.135 [2024-12-11 14:08:08.937784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:16.135 [2024-12-11 14:08:08.937792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.135 [2024-12-11 14:08:08.937801] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:16.135 [2024-12-11 14:08:08.937811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:16.135 [2024-12-11 14:08:08.937835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:16.135 [2024-12-11 14:08:08.937845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.135 [2024-12-11 
14:08:08.937855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:16.135 [2024-12-11 14:08:08.937864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:16.135 [2024-12-11 14:08:08.937873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:16.135 [2024-12-11 14:08:08.937882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:16.135 [2024-12-11 14:08:08.937891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:16.135 [2024-12-11 14:08:08.937901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:16.135 [2024-12-11 14:08:08.937911] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:16.135 [2024-12-11 14:08:08.937923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:16.135 [2024-12-11 14:08:08.937934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:16.135 [2024-12-11 14:08:08.937944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:16.135 [2024-12-11 14:08:08.937954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:16.135 [2024-12-11 14:08:08.937965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:16.135 [2024-12-11 14:08:08.937975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:16.135 [2024-12-11 14:08:08.937985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:16.135 [2024-12-11 14:08:08.937996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:16.135 [2024-12-11 14:08:08.938006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:16.135 [2024-12-11 14:08:08.938016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:16.135 [2024-12-11 14:08:08.938026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:16.135 [2024-12-11 14:08:08.938036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:16.135 [2024-12-11 14:08:08.938046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:16.135 [2024-12-11 14:08:08.938056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:16.135 [2024-12-11 14:08:08.938066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:16.135 [2024-12-11 14:08:08.938076] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:27:16.135 [2024-12-11 14:08:08.938087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:16.135 [2024-12-11 14:08:08.938098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:16.135 [2024-12-11 14:08:08.938117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:16.135 [2024-12-11 14:08:08.938127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:16.135 [2024-12-11 14:08:08.938138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:16.135 [2024-12-11 14:08:08.938148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.135 [2024-12-11 14:08:08.938159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:16.135 [2024-12-11 14:08:08.938169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:27:16.135 [2024-12-11 14:08:08.938179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.135 [2024-12-11 14:08:08.978480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.135 [2024-12-11 14:08:08.978688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:16.135 [2024-12-11 14:08:08.978715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.312 ms 00:27:16.135 [2024-12-11 14:08:08.978726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.135 [2024-12-11 14:08:08.978849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.135 [2024-12-11 14:08:08.978861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:16.135 [2024-12-11 14:08:08.978872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:16.135 [2024-12-11 14:08:08.978883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.135 [2024-12-11 14:08:09.046122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.135 [2024-12-11 14:08:09.046174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:16.135 [2024-12-11 14:08:09.046196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.271 ms 00:27:16.135 [2024-12-11 14:08:09.046206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.135 [2024-12-11 14:08:09.046269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.135 [2024-12-11 14:08:09.046281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:16.135 [2024-12-11 14:08:09.046293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:16.135 [2024-12-11 14:08:09.046302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.135 [2024-12-11 14:08:09.046814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.135 [2024-12-11 14:08:09.046840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:16.135 [2024-12-11 14:08:09.046852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:27:16.135 [2024-12-11 14:08:09.046866] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.135 [2024-12-11 14:08:09.046993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.135 [2024-12-11 14:08:09.047007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:16.135 [2024-12-11 14:08:09.047018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:27:16.135 [2024-12-11 14:08:09.047028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.135 [2024-12-11 14:08:09.067780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.135 [2024-12-11 14:08:09.068039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:16.135 [2024-12-11 14:08:09.068124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.762 ms 00:27:16.135 [2024-12-11 14:08:09.068161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.135 [2024-12-11 14:08:09.088829] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:16.135 [2024-12-11 14:08:09.089067] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:16.135 [2024-12-11 14:08:09.089193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.135 [2024-12-11 14:08:09.089227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:16.135 [2024-12-11 14:08:09.089260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.868 ms 00:27:16.135 [2024-12-11 14:08:09.089291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.135 [2024-12-11 14:08:09.119678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.135 [2024-12-11 14:08:09.119962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:16.135 [2024-12-11 14:08:09.120085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.359 ms 00:27:16.135 [2024-12-11 14:08:09.120124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.135 [2024-12-11 14:08:09.139708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.135 [2024-12-11 14:08:09.139934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:16.135 [2024-12-11 14:08:09.140012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.526 ms 00:27:16.135 [2024-12-11 14:08:09.140050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.136 [2024-12-11 14:08:09.159526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.136 [2024-12-11 14:08:09.159771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:16.136 [2024-12-11 14:08:09.159895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.388 ms 00:27:16.136 [2024-12-11 14:08:09.159933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.136 [2024-12-11 14:08:09.160762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.136 [2024-12-11 14:08:09.160787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:16.136 [2024-12-11 14:08:09.160800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:27:16.136 [2024-12-11 14:08:09.160810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
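The layout dump above is internally consistent: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB reported for the l2p region (20971520 * 4 / 2^20 = 80). A minimal check of that arithmetic:

    #include <stdio.h>

    /* Check the l2p region size against the layout dump above:
     * entries * address size should equal the reported 80.00 MiB. */
    int main(void)
    {
        const unsigned long long entries = 20971520ULL; /* L2P entries */
        const unsigned long long addr_sz = 4ULL;        /* L2P address size */
        double mib = (double)(entries * addr_sz) / (1024.0 * 1024.0);

        printf("l2p region: %.2f MiB\n", mib); /* prints: l2p region: 80.00 MiB */
        return 0;
    }
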
00:27:16.396 [2024-12-11 14:08:09.247792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.396 [2024-12-11 14:08:09.247863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:16.396 [2024-12-11 14:08:09.247881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.082 ms 00:27:16.396 [2024-12-11 14:08:09.247892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.396 [2024-12-11 14:08:09.261584] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:16.396 [2024-12-11 14:08:09.264892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.396 [2024-12-11 14:08:09.264930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:16.396 [2024-12-11 14:08:09.264946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.939 ms 00:27:16.396 [2024-12-11 14:08:09.264963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.396 [2024-12-11 14:08:09.265077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.396 [2024-12-11 14:08:09.265091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:16.396 [2024-12-11 14:08:09.265103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:16.396 [2024-12-11 14:08:09.265113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.396 [2024-12-11 14:08:09.265220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.396 [2024-12-11 14:08:09.265233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:16.396 [2024-12-11 14:08:09.265244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:16.396 [2024-12-11 14:08:09.265254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.396 [2024-12-11 14:08:09.265283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.396 [2024-12-11 14:08:09.265295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:16.396 [2024-12-11 14:08:09.265305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:16.396 [2024-12-11 14:08:09.265315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.396 [2024-12-11 14:08:09.265350] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:16.396 [2024-12-11 14:08:09.265362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.396 [2024-12-11 14:08:09.265372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:16.396 [2024-12-11 14:08:09.265382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:16.396 [2024-12-11 14:08:09.265396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.396 [2024-12-11 14:08:09.303319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.396 [2024-12-11 14:08:09.303524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:16.396 [2024-12-11 14:08:09.303550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.961 ms 00:27:16.396 [2024-12-11 14:08:09.303561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.396 [2024-12-11 14:08:09.303653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.396 [2024-12-11 
14:08:09.303667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:16.396 [2024-12-11 14:08:09.303678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:16.396 [2024-12-11 14:08:09.303688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.396 [2024-12-11 14:08:09.304871] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 404.008 ms, result 0 00:27:17.334  [2024-12-11T14:08:11.319Z] Copying: 26/1024 [MB] (26 MBps) [2024-12-11T14:08:12.773Z] Copying: 52/1024 [MB] (25 MBps) [2024-12-11T14:08:13.342Z] Copying: 77/1024 [MB] (25 MBps) [2024-12-11T14:08:14.720Z] Copying: 101/1024 [MB] (24 MBps) [2024-12-11T14:08:15.655Z] Copying: 125/1024 [MB] (24 MBps) [2024-12-11T14:08:16.592Z] Copying: 149/1024 [MB] (23 MBps) [2024-12-11T14:08:17.530Z] Copying: 174/1024 [MB] (24 MBps) [2024-12-11T14:08:18.465Z] Copying: 199/1024 [MB] (24 MBps) [2024-12-11T14:08:19.402Z] Copying: 223/1024 [MB] (24 MBps) [2024-12-11T14:08:20.339Z] Copying: 248/1024 [MB] (24 MBps) [2024-12-11T14:08:21.717Z] Copying: 271/1024 [MB] (23 MBps) [2024-12-11T14:08:22.654Z] Copying: 297/1024 [MB] (25 MBps) [2024-12-11T14:08:23.645Z] Copying: 323/1024 [MB] (25 MBps) [2024-12-11T14:08:24.580Z] Copying: 348/1024 [MB] (25 MBps) [2024-12-11T14:08:25.517Z] Copying: 374/1024 [MB] (25 MBps) [2024-12-11T14:08:26.457Z] Copying: 399/1024 [MB] (25 MBps) [2024-12-11T14:08:27.393Z] Copying: 425/1024 [MB] (26 MBps) [2024-12-11T14:08:28.329Z] Copying: 451/1024 [MB] (26 MBps) [2024-12-11T14:08:29.708Z] Copying: 477/1024 [MB] (25 MBps) [2024-12-11T14:08:30.644Z] Copying: 502/1024 [MB] (25 MBps) [2024-12-11T14:08:31.580Z] Copying: 527/1024 [MB] (25 MBps) [2024-12-11T14:08:32.517Z] Copying: 552/1024 [MB] (24 MBps) [2024-12-11T14:08:33.455Z] Copying: 578/1024 [MB] (26 MBps) [2024-12-11T14:08:34.392Z] Copying: 604/1024 [MB] (26 MBps) [2024-12-11T14:08:35.329Z] Copying: 630/1024 [MB] (25 MBps) [2024-12-11T14:08:36.707Z] Copying: 656/1024 [MB] (25 MBps) [2024-12-11T14:08:37.276Z] Copying: 681/1024 [MB] (24 MBps) [2024-12-11T14:08:38.655Z] Copying: 706/1024 [MB] (24 MBps) [2024-12-11T14:08:39.594Z] Copying: 731/1024 [MB] (25 MBps) [2024-12-11T14:08:40.532Z] Copying: 756/1024 [MB] (24 MBps) [2024-12-11T14:08:41.471Z] Copying: 781/1024 [MB] (24 MBps) [2024-12-11T14:08:42.415Z] Copying: 806/1024 [MB] (25 MBps) [2024-12-11T14:08:43.352Z] Copying: 830/1024 [MB] (24 MBps) [2024-12-11T14:08:44.291Z] Copying: 856/1024 [MB] (25 MBps) [2024-12-11T14:08:45.669Z] Copying: 880/1024 [MB] (24 MBps) [2024-12-11T14:08:46.607Z] Copying: 905/1024 [MB] (24 MBps) [2024-12-11T14:08:47.546Z] Copying: 930/1024 [MB] (25 MBps) [2024-12-11T14:08:48.484Z] Copying: 955/1024 [MB] (24 MBps) [2024-12-11T14:08:49.423Z] Copying: 980/1024 [MB] (24 MBps) [2024-12-11T14:08:50.361Z] Copying: 1004/1024 [MB] (24 MBps) [2024-12-11T14:08:50.930Z] Copying: 1023/1024 [MB] (18 MBps) [2024-12-11T14:08:50.930Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-11 14:08:50.805515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.883 [2024-12-11 14:08:50.805589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:57.883 [2024-12-11 14:08:50.805607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:57.883 [2024-12-11 14:08:50.805618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.883 [2024-12-11 14:08:50.808444] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:57.883 [2024-12-11 14:08:50.813471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.883 [2024-12-11 14:08:50.813509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:57.883 [2024-12-11 14:08:50.813524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.993 ms 00:27:57.883 [2024-12-11 14:08:50.813541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.883 [2024-12-11 14:08:50.822869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.883 [2024-12-11 14:08:50.822913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:57.883 [2024-12-11 14:08:50.822927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.810 ms 00:27:57.883 [2024-12-11 14:08:50.822939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.883 [2024-12-11 14:08:50.847501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.883 [2024-12-11 14:08:50.847560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:57.883 [2024-12-11 14:08:50.847578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.580 ms 00:27:57.883 [2024-12-11 14:08:50.847588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.883 [2024-12-11 14:08:50.852658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.883 [2024-12-11 14:08:50.852855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:57.883 [2024-12-11 14:08:50.852878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.029 ms 00:27:57.883 [2024-12-11 14:08:50.852889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.883 [2024-12-11 14:08:50.890285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.883 [2024-12-11 14:08:50.890362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:57.883 [2024-12-11 14:08:50.890379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.409 ms 00:27:57.883 [2024-12-11 14:08:50.890388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.883 [2024-12-11 14:08:50.911548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.883 [2024-12-11 14:08:50.911603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:57.883 [2024-12-11 14:08:50.911619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.135 ms 00:27:57.883 [2024-12-11 14:08:50.911631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.194 [2024-12-11 14:08:51.026844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.194 [2024-12-11 14:08:51.027071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:58.194 [2024-12-11 14:08:51.027108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.341 ms 00:27:58.194 [2024-12-11 14:08:51.027119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.194 [2024-12-11 14:08:51.064653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.194 [2024-12-11 14:08:51.064706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:58.194 [2024-12-11 14:08:51.064723] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 37.565 ms 00:27:58.194 [2024-12-11 14:08:51.064748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.194 [2024-12-11 14:08:51.101423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.194 [2024-12-11 14:08:51.101470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:58.194 [2024-12-11 14:08:51.101486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.681 ms 00:27:58.194 [2024-12-11 14:08:51.101496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.194 [2024-12-11 14:08:51.138395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.194 [2024-12-11 14:08:51.138446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:58.194 [2024-12-11 14:08:51.138462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.907 ms 00:27:58.194 [2024-12-11 14:08:51.138472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.194 [2024-12-11 14:08:51.175872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.194 [2024-12-11 14:08:51.175922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:58.194 [2024-12-11 14:08:51.175938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.358 ms 00:27:58.194 [2024-12-11 14:08:51.175949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.194 [2024-12-11 14:08:51.176000] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:58.194 [2024-12-11 14:08:51.176019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 109056 / 261120 wr_cnt: 1 state: open 00:27:58.194 [2024-12-11 14:08:51.176032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176163] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176434] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:58.194 [2024-12-11 14:08:51.176592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 
14:08:51.176702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:27:58.195 [2024-12-11 14:08:51.176975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.176996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.177006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.177017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.177027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.177039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.177050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.177060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.177071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.177082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.177092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:58.195 [2024-12-11 14:08:51.177110] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:58.195 [2024-12-11 14:08:51.177120] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d06a086c-22c8-4d8c-a657-f1bbdab1dbcb 00:27:58.195 [2024-12-11 14:08:51.177148] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 109056 00:27:58.195 [2024-12-11 14:08:51.177158] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 110016 00:27:58.195 [2024-12-11 14:08:51.177168] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 109056 00:27:58.195 [2024-12-11 14:08:51.177179] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0088 00:27:58.195 [2024-12-11 14:08:51.177189] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:58.195 [2024-12-11 14:08:51.177199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:58.195 [2024-12-11 14:08:51.177209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:58.195 [2024-12-11 14:08:51.177218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:58.195 [2024-12-11 14:08:51.177226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:58.195 [2024-12-11 14:08:51.177236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.195 [2024-12-11 14:08:51.177246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:58.195 [2024-12-11 14:08:51.177257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:27:58.195 [2024-12-11 14:08:51.177267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.195 [2024-12-11 14:08:51.197520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
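The WAF figure in the statistics dump follows directly from the counters beside it: total writes divided by user writes, 110016 / 109056 ≈ 1.0088, i.e. roughly 960 blocks of internal writes on top of the user payload (note that the 109056 user writes also match the total valid LBAs and the fill of Band 1 above). Reproducing the number:

    #include <stdio.h>

    /* Reproduce the write amplification factor from the stats dump:
     * WAF = total writes / user writes. */
    int main(void)
    {
        const double total_writes = 110016.0; /* from the ftl_debug.c dump */
        const double user_writes  = 109056.0;

        printf("WAF: %.4f\n", total_writes / user_writes); /* prints: WAF: 1.0088 */
        return 0;
    }
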
00:27:58.195 [2024-12-11 14:08:51.197705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:58.195 [2024-12-11 14:08:51.197852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.241 ms 00:27:58.195 [2024-12-11 14:08:51.197891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.195 [2024-12-11 14:08:51.198500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.195 [2024-12-11 14:08:51.198585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:58.195 [2024-12-11 14:08:51.198676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:27:58.195 [2024-12-11 14:08:51.198718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.469 [2024-12-11 14:08:51.250020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.469 [2024-12-11 14:08:51.250260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:58.469 [2024-12-11 14:08:51.250380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.469 [2024-12-11 14:08:51.250415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.469 [2024-12-11 14:08:51.250509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.469 [2024-12-11 14:08:51.250542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:58.469 [2024-12-11 14:08:51.250616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.469 [2024-12-11 14:08:51.250657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.469 [2024-12-11 14:08:51.250781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.469 [2024-12-11 14:08:51.250819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:58.469 [2024-12-11 14:08:51.250959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.469 [2024-12-11 14:08:51.250990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.469 [2024-12-11 14:08:51.251077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.469 [2024-12-11 14:08:51.251112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:58.469 [2024-12-11 14:08:51.251143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.469 [2024-12-11 14:08:51.251173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.469 [2024-12-11 14:08:51.376043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.469 [2024-12-11 14:08:51.376259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:58.469 [2024-12-11 14:08:51.376375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.469 [2024-12-11 14:08:51.376411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.469 [2024-12-11 14:08:51.477079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.469 [2024-12-11 14:08:51.477350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:58.469 [2024-12-11 14:08:51.477426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.469 [2024-12-11 14:08:51.477470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.469 [2024-12-11 
14:08:51.477564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.469 [2024-12-11 14:08:51.477577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:58.469 [2024-12-11 14:08:51.477588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.469 [2024-12-11 14:08:51.477598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.469 [2024-12-11 14:08:51.477648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.469 [2024-12-11 14:08:51.477660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:58.469 [2024-12-11 14:08:51.477670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.469 [2024-12-11 14:08:51.477680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.469 [2024-12-11 14:08:51.477799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.469 [2024-12-11 14:08:51.477812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:58.469 [2024-12-11 14:08:51.477848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.469 [2024-12-11 14:08:51.477860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.469 [2024-12-11 14:08:51.477898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.469 [2024-12-11 14:08:51.477910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:58.469 [2024-12-11 14:08:51.477921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.469 [2024-12-11 14:08:51.477930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.469 [2024-12-11 14:08:51.477972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.469 [2024-12-11 14:08:51.477983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:58.469 [2024-12-11 14:08:51.477993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.469 [2024-12-11 14:08:51.478003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.469 [2024-12-11 14:08:51.478044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.469 [2024-12-11 14:08:51.478056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:58.469 [2024-12-11 14:08:51.478066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.469 [2024-12-11 14:08:51.478076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.469 [2024-12-11 14:08:51.478202] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 675.773 ms, result 0 00:28:00.376 00:28:00.376 00:28:00.376 14:08:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:02.283 14:08:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:02.283 [2024-12-11 14:08:55.216801] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
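The spdk_dd invocation above asks for 262144 blocks. Assuming the same 1024 MB total seen in the copy progress earlier, that implies a 4 KiB I/O block size (1024 MiB / 262144 = 4096 bytes; the block size itself is not printed in this log), and at the reported average of 24 MBps a full pass takes about 43 s, which lines up with the progress timestamps running from 14:08:11 to 14:08:50. A sketch of the arithmetic:

    #include <stdio.h>

    /* Cross-check the spdk_dd transfer size: 262144 blocks at an assumed
     * 4 KiB block size gives 1024 MiB, ~43 s at the observed 24 MBps. */
    int main(void)
    {
        const unsigned long long count  = 262144ULL;
        const unsigned long long blk_sz = 4096ULL; /* assumed, not logged */
        double mib = (double)(count * blk_sz) / (1024.0 * 1024.0);

        printf("total: %.0f MiB, ~%.0f s at 24 MBps\n", mib, mib / 24.0);
        return 0;
    }
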
00:28:02.283 [2024-12-11 14:08:55.216963] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83625 ] 00:28:02.541 [2024-12-11 14:08:55.398028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.541 [2024-12-11 14:08:55.520332] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.111 [2024-12-11 14:08:55.918523] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:03.111 [2024-12-11 14:08:55.918594] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:03.111 [2024-12-11 14:08:56.080081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.111 [2024-12-11 14:08:56.080363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:03.111 [2024-12-11 14:08:56.080389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:03.111 [2024-12-11 14:08:56.080401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.111 [2024-12-11 14:08:56.080473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.111 [2024-12-11 14:08:56.080488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:03.111 [2024-12-11 14:08:56.080499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:28:03.111 [2024-12-11 14:08:56.080510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.111 [2024-12-11 14:08:56.080533] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:03.111 [2024-12-11 14:08:56.081551] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:03.111 [2024-12-11 14:08:56.081579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.111 [2024-12-11 14:08:56.081591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:03.111 [2024-12-11 14:08:56.081602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:28:03.111 [2024-12-11 14:08:56.081612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.111 [2024-12-11 14:08:56.083117] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:03.111 [2024-12-11 14:08:56.102586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.111 [2024-12-11 14:08:56.102787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:03.111 [2024-12-11 14:08:56.102811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.499 ms 00:28:03.111 [2024-12-11 14:08:56.102839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.111 [2024-12-11 14:08:56.102958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.111 [2024-12-11 14:08:56.102972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:03.111 [2024-12-11 14:08:56.102983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:03.111 [2024-12-11 14:08:56.102994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.111 [2024-12-11 14:08:56.110335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
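From here the device is brought up again for the read-back step of the dirty_shutdown test, repeating the same managed startup sequence. When auditing these long traces it helps to total the per-step durations mechanically and set the sum beside the final "Management process finished" figure (404.008 ms for the earlier 'FTL startup'); the sum need not match exactly, since time spent between steps also counts toward the reported total. A small sketch that reads a log on stdin:

    #include <stdio.h>
    #include <string.h>

    /* Sum the "duration: X ms" values from trace_step lines on stdin,
     * for comparison with the "Management process finished" total. */
    int main(void)
    {
        char line[4096];
        double ms, total = 0.0;

        while (fgets(line, sizeof(line), stdin)) {
            const char *p = strstr(line, "duration: ");
            if (p && sscanf(p, "duration: %lf ms", &ms) == 1)
                total += ms;
        }
        printf("summed step durations: %.3f ms\n", total);
        return 0;
    }
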
00:28:03.111 [2024-12-11 14:08:56.110374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:03.111 [2024-12-11 14:08:56.110386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.259 ms 00:28:03.111 [2024-12-11 14:08:56.110402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.111 [2024-12-11 14:08:56.110486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.111 [2024-12-11 14:08:56.110502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:03.111 [2024-12-11 14:08:56.110512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:28:03.111 [2024-12-11 14:08:56.110523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.111 [2024-12-11 14:08:56.110574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.111 [2024-12-11 14:08:56.110586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:03.111 [2024-12-11 14:08:56.110597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:03.111 [2024-12-11 14:08:56.110607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.111 [2024-12-11 14:08:56.110638] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:03.111 [2024-12-11 14:08:56.115590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.111 [2024-12-11 14:08:56.115625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:03.111 [2024-12-11 14:08:56.115642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.966 ms 00:28:03.111 [2024-12-11 14:08:56.115652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.111 [2024-12-11 14:08:56.115690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.111 [2024-12-11 14:08:56.115701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:03.111 [2024-12-11 14:08:56.115712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:03.111 [2024-12-11 14:08:56.115722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.111 [2024-12-11 14:08:56.115786] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:03.111 [2024-12-11 14:08:56.115812] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:03.111 [2024-12-11 14:08:56.115865] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:03.111 [2024-12-11 14:08:56.115888] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:03.111 [2024-12-11 14:08:56.115976] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:03.111 [2024-12-11 14:08:56.115990] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:03.111 [2024-12-11 14:08:56.116003] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:03.111 [2024-12-11 14:08:56.116017] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:03.111 [2024-12-11 14:08:56.116029] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:03.111 [2024-12-11 14:08:56.116040] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:03.111 [2024-12-11 14:08:56.116051] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:03.111 [2024-12-11 14:08:56.116061] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:03.111 [2024-12-11 14:08:56.116074] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:03.111 [2024-12-11 14:08:56.116085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.111 [2024-12-11 14:08:56.116095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:03.111 [2024-12-11 14:08:56.116106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:28:03.111 [2024-12-11 14:08:56.116116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.111 [2024-12-11 14:08:56.116192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.111 [2024-12-11 14:08:56.116204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:03.111 [2024-12-11 14:08:56.116214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:03.111 [2024-12-11 14:08:56.116223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.111 [2024-12-11 14:08:56.116314] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:03.111 [2024-12-11 14:08:56.116327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:03.111 [2024-12-11 14:08:56.116337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:03.111 [2024-12-11 14:08:56.116347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.111 [2024-12-11 14:08:56.116358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:03.111 [2024-12-11 14:08:56.116367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:03.111 [2024-12-11 14:08:56.116376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:03.111 [2024-12-11 14:08:56.116385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:03.111 [2024-12-11 14:08:56.116395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:03.111 [2024-12-11 14:08:56.116405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:03.111 [2024-12-11 14:08:56.116415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:03.111 [2024-12-11 14:08:56.116424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:03.111 [2024-12-11 14:08:56.116433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:03.111 [2024-12-11 14:08:56.116454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:03.111 [2024-12-11 14:08:56.116464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:03.111 [2024-12-11 14:08:56.116473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.111 [2024-12-11 14:08:56.116483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:03.111 [2024-12-11 14:08:56.116492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:03.111 [2024-12-11 14:08:56.116501] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.111 [2024-12-11 14:08:56.116510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:03.111 [2024-12-11 14:08:56.116520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:03.111 [2024-12-11 14:08:56.116529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.111 [2024-12-11 14:08:56.116538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:03.111 [2024-12-11 14:08:56.116546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:03.111 [2024-12-11 14:08:56.116555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.111 [2024-12-11 14:08:56.116565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:03.111 [2024-12-11 14:08:56.116574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:03.111 [2024-12-11 14:08:56.116583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.111 [2024-12-11 14:08:56.116591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:03.111 [2024-12-11 14:08:56.116600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:03.111 [2024-12-11 14:08:56.116609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.111 [2024-12-11 14:08:56.116618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:03.111 [2024-12-11 14:08:56.116627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:03.111 [2024-12-11 14:08:56.116636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:03.111 [2024-12-11 14:08:56.116645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:03.111 [2024-12-11 14:08:56.116654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:03.111 [2024-12-11 14:08:56.116662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:03.111 [2024-12-11 14:08:56.116672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:03.111 [2024-12-11 14:08:56.116680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:03.112 [2024-12-11 14:08:56.116689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.112 [2024-12-11 14:08:56.116698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:03.112 [2024-12-11 14:08:56.116708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:03.112 [2024-12-11 14:08:56.116717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.112 [2024-12-11 14:08:56.116725] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:03.112 [2024-12-11 14:08:56.116735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:03.112 [2024-12-11 14:08:56.116744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:03.112 [2024-12-11 14:08:56.116754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.112 [2024-12-11 14:08:56.116764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:03.112 [2024-12-11 14:08:56.116773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:03.112 [2024-12-11 14:08:56.116783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:03.112 
[2024-12-11 14:08:56.116792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:03.112 [2024-12-11 14:08:56.116801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:03.112 [2024-12-11 14:08:56.116810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:03.112 [2024-12-11 14:08:56.116820] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:03.112 [2024-12-11 14:08:56.116842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:03.112 [2024-12-11 14:08:56.116857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:03.112 [2024-12-11 14:08:56.116868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:03.112 [2024-12-11 14:08:56.116878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:03.112 [2024-12-11 14:08:56.116889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:03.112 [2024-12-11 14:08:56.116899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:03.112 [2024-12-11 14:08:56.116909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:03.112 [2024-12-11 14:08:56.116920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:03.112 [2024-12-11 14:08:56.116930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:03.112 [2024-12-11 14:08:56.116940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:03.112 [2024-12-11 14:08:56.116951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:03.112 [2024-12-11 14:08:56.116961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:03.112 [2024-12-11 14:08:56.116972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:03.112 [2024-12-11 14:08:56.116982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:03.112 [2024-12-11 14:08:56.116992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:03.112 [2024-12-11 14:08:56.117002] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:03.112 [2024-12-11 14:08:56.117013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:03.112 [2024-12-11 14:08:56.117023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:03.112 [2024-12-11 14:08:56.117034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:03.112 [2024-12-11 14:08:56.117049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:03.112 [2024-12-11 14:08:56.117060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:03.112 [2024-12-11 14:08:56.117071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.112 [2024-12-11 14:08:56.117081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:03.112 [2024-12-11 14:08:56.117091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:28:03.112 [2024-12-11 14:08:56.117101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.112 [2024-12-11 14:08:56.154022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.112 [2024-12-11 14:08:56.154075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:03.112 [2024-12-11 14:08:56.154091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.928 ms 00:28:03.112 [2024-12-11 14:08:56.154114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.112 [2024-12-11 14:08:56.154214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.112 [2024-12-11 14:08:56.154226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:03.112 [2024-12-11 14:08:56.154237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:03.112 [2024-12-11 14:08:56.154248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.371 [2024-12-11 14:08:56.216890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.371 [2024-12-11 14:08:56.216943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:03.371 [2024-12-11 14:08:56.216959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.658 ms 00:28:03.371 [2024-12-11 14:08:56.216970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.371 [2024-12-11 14:08:56.217030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.371 [2024-12-11 14:08:56.217042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:03.371 [2024-12-11 14:08:56.217057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:03.371 [2024-12-11 14:08:56.217067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.371 [2024-12-11 14:08:56.217572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.371 [2024-12-11 14:08:56.217587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:03.371 [2024-12-11 14:08:56.217598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:28:03.371 [2024-12-11 14:08:56.217607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.371 [2024-12-11 14:08:56.217729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.372 [2024-12-11 14:08:56.217743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:03.372 [2024-12-11 14:08:56.217757] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:28:03.372 [2024-12-11 14:08:56.217767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.372 [2024-12-11 14:08:56.236778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.372 [2024-12-11 14:08:56.236849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:03.372 [2024-12-11 14:08:56.236866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.019 ms 00:28:03.372 [2024-12-11 14:08:56.236877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.372 [2024-12-11 14:08:56.256120] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:03.372 [2024-12-11 14:08:56.256172] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:03.372 [2024-12-11 14:08:56.256190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.372 [2024-12-11 14:08:56.256201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:03.372 [2024-12-11 14:08:56.256215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.198 ms 00:28:03.372 [2024-12-11 14:08:56.256226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.372 [2024-12-11 14:08:56.286578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.372 [2024-12-11 14:08:56.286640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:03.372 [2024-12-11 14:08:56.286656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.341 ms 00:28:03.372 [2024-12-11 14:08:56.286667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.372 [2024-12-11 14:08:56.305935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.372 [2024-12-11 14:08:56.306143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:03.372 [2024-12-11 14:08:56.306167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.226 ms 00:28:03.372 [2024-12-11 14:08:56.306179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.372 [2024-12-11 14:08:56.324794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.372 [2024-12-11 14:08:56.324859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:03.372 [2024-12-11 14:08:56.324875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.593 ms 00:28:03.372 [2024-12-11 14:08:56.324886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.372 [2024-12-11 14:08:56.325713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.372 [2024-12-11 14:08:56.325738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:03.372 [2024-12-11 14:08:56.325754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:28:03.372 [2024-12-11 14:08:56.325765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.372 [2024-12-11 14:08:56.412505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.372 [2024-12-11 14:08:56.412807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:03.372 [2024-12-11 14:08:56.412861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.851 ms 00:28:03.372 [2024-12-11 14:08:56.412872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.631 [2024-12-11 14:08:56.426079] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:03.631 [2024-12-11 14:08:56.429323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.631 [2024-12-11 14:08:56.429366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:03.631 [2024-12-11 14:08:56.429381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.387 ms 00:28:03.631 [2024-12-11 14:08:56.429393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.631 [2024-12-11 14:08:56.429508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.631 [2024-12-11 14:08:56.429523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:03.631 [2024-12-11 14:08:56.429535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:03.631 [2024-12-11 14:08:56.429549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.631 [2024-12-11 14:08:56.431081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.631 [2024-12-11 14:08:56.431244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:03.631 [2024-12-11 14:08:56.431266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.468 ms 00:28:03.631 [2024-12-11 14:08:56.431278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.631 [2024-12-11 14:08:56.431324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.631 [2024-12-11 14:08:56.431335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:03.631 [2024-12-11 14:08:56.431345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:03.631 [2024-12-11 14:08:56.431355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.631 [2024-12-11 14:08:56.431397] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:03.631 [2024-12-11 14:08:56.431410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.631 [2024-12-11 14:08:56.431420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:03.631 [2024-12-11 14:08:56.431431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:03.631 [2024-12-11 14:08:56.431440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.631 [2024-12-11 14:08:56.469317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.631 [2024-12-11 14:08:56.469381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:03.631 [2024-12-11 14:08:56.469408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.915 ms 00:28:03.631 [2024-12-11 14:08:56.469419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.631 [2024-12-11 14:08:56.469520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.632 [2024-12-11 14:08:56.469533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:03.632 [2024-12-11 14:08:56.469544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:03.632 [2024-12-11 14:08:56.469555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:03.632 [2024-12-11 14:08:56.470692] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 390.817 ms, result 0 00:28:05.008  [2024-12-11T14:09:28.982Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-12-11 14:09:28.746546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.935 [2024-12-11 14:09:28.747253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:35.935 [2024-12-11 14:09:28.747312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:35.935 [2024-12-11 14:09:28.747340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.935 [2024-12-11 14:09:28.747423] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:35.935 [2024-12-11 14:09:28.758594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.935 [2024-12-11 14:09:28.758851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:35.935 [2024-12-11 14:09:28.758988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.140 ms 00:28:35.935 [2024-12-11 14:09:28.759049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.935 [2024-12-11 14:09:28.759474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.935 [2024-12-11 14:09:28.759659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:35.935 [2024-12-11 14:09:28.759784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:28:35.935 [2024-12-11 14:09:28.759935]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.935 [2024-12-11 14:09:28.773875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.935 [2024-12-11 14:09:28.774067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:35.935 [2024-12-11 14:09:28.774094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.879 ms 00:28:35.935 [2024-12-11 14:09:28.774106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.935 [2024-12-11 14:09:28.779539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.935 [2024-12-11 14:09:28.779578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:35.935 [2024-12-11 14:09:28.779601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.391 ms 00:28:35.935 [2024-12-11 14:09:28.779611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.936 [2024-12-11 14:09:28.818308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.936 [2024-12-11 14:09:28.818374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:35.936 [2024-12-11 14:09:28.818390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.720 ms 00:28:35.936 [2024-12-11 14:09:28.818400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.936 [2024-12-11 14:09:28.840235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.936 [2024-12-11 14:09:28.840294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:35.936 [2024-12-11 14:09:28.840311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.807 ms 00:28:35.936 [2024-12-11 14:09:28.840322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.936 [2024-12-11 14:09:28.842530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.936 [2024-12-11 14:09:28.842590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:35.936 [2024-12-11 14:09:28.842605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.151 ms 00:28:35.936 [2024-12-11 14:09:28.842625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.936 [2024-12-11 14:09:28.879313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.936 [2024-12-11 14:09:28.879371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:35.936 [2024-12-11 14:09:28.879387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.728 ms 00:28:35.936 [2024-12-11 14:09:28.879397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.936 [2024-12-11 14:09:28.915859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.936 [2024-12-11 14:09:28.915916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:35.936 [2024-12-11 14:09:28.915932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.467 ms 00:28:35.936 [2024-12-11 14:09:28.915943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.936 [2024-12-11 14:09:28.952174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.936 [2024-12-11 14:09:28.952251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:35.936 [2024-12-11 14:09:28.952268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 36.240 ms 00:28:35.936 [2024-12-11 14:09:28.952278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.196 [2024-12-11 14:09:28.988857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.196 [2024-12-11 14:09:28.988919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:36.196 [2024-12-11 14:09:28.988936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.513 ms 00:28:36.196 [2024-12-11 14:09:28.988947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.196 [2024-12-11 14:09:28.989003] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:36.196 [2024-12-11 14:09:28.989021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:36.196 [2024-12-11 14:09:28.989035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:36.196 [2024-12-11 14:09:28.989046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: 
free 00:28:36.196 [2024-12-11 14:09:28.989238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 
261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:36.196 [2024-12-11 14:09:28.989601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.989989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.990000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.990011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.990022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.990033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.990044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.990054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.990065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.990076] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.990087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.990098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.990109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.990127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.990139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:36.197 [2024-12-11 14:09:28.990156] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:36.197 [2024-12-11 14:09:28.990174] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d06a086c-22c8-4d8c-a657-f1bbdab1dbcb 00:28:36.197 [2024-12-11 14:09:28.990186] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:36.197 [2024-12-11 14:09:28.990196] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 155584 00:28:36.197 [2024-12-11 14:09:28.990210] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 153600 00:28:36.197 [2024-12-11 14:09:28.990221] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0129 00:28:36.197 [2024-12-11 14:09:28.990231] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:36.197 [2024-12-11 14:09:28.990253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:36.197 [2024-12-11 14:09:28.990264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:36.197 [2024-12-11 14:09:28.990272] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:36.197 [2024-12-11 14:09:28.990281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:36.197 [2024-12-11 14:09:28.990292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.197 [2024-12-11 14:09:28.990302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:36.197 [2024-12-11 14:09:28.990322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.292 ms 00:28:36.197 [2024-12-11 14:09:28.990332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.197 [2024-12-11 14:09:29.010773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.197 [2024-12-11 14:09:29.010822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:36.197 [2024-12-11 14:09:29.010857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.427 ms 00:28:36.197 [2024-12-11 14:09:29.010867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.197 [2024-12-11 14:09:29.011479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.197 [2024-12-11 14:09:29.011495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:36.197 [2024-12-11 14:09:29.011506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:28:36.197 [2024-12-11 14:09:29.011516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.197 [2024-12-11 14:09:29.064487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:28:36.197 [2024-12-11 14:09:29.064546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:36.197 [2024-12-11 14:09:29.064562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.197 [2024-12-11 14:09:29.064572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.197 [2024-12-11 14:09:29.064639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.197 [2024-12-11 14:09:29.064650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:36.197 [2024-12-11 14:09:29.064661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.197 [2024-12-11 14:09:29.064671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.197 [2024-12-11 14:09:29.064778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.197 [2024-12-11 14:09:29.064792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:36.197 [2024-12-11 14:09:29.064803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.197 [2024-12-11 14:09:29.064813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.197 [2024-12-11 14:09:29.064852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.197 [2024-12-11 14:09:29.064864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:36.197 [2024-12-11 14:09:29.064875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.197 [2024-12-11 14:09:29.064885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.197 [2024-12-11 14:09:29.192363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.197 [2024-12-11 14:09:29.192665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:36.197 [2024-12-11 14:09:29.192689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.197 [2024-12-11 14:09:29.192700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.458 [2024-12-11 14:09:29.297217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.458 [2024-12-11 14:09:29.297277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:36.458 [2024-12-11 14:09:29.297292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.458 [2024-12-11 14:09:29.297303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.458 [2024-12-11 14:09:29.297419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.458 [2024-12-11 14:09:29.297436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:36.458 [2024-12-11 14:09:29.297447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.458 [2024-12-11 14:09:29.297457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.458 [2024-12-11 14:09:29.297511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.458 [2024-12-11 14:09:29.297523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:36.458 [2024-12-11 14:09:29.297533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.458 [2024-12-11 14:09:29.297544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.458 [2024-12-11 
14:09:29.297658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.458 [2024-12-11 14:09:29.297671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:36.458 [2024-12-11 14:09:29.297687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.458 [2024-12-11 14:09:29.297702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.458 [2024-12-11 14:09:29.297739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.458 [2024-12-11 14:09:29.297752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:36.458 [2024-12-11 14:09:29.297762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.458 [2024-12-11 14:09:29.297772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.458 [2024-12-11 14:09:29.297851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.458 [2024-12-11 14:09:29.297867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:36.458 [2024-12-11 14:09:29.297882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.458 [2024-12-11 14:09:29.297893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.458 [2024-12-11 14:09:29.297942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.458 [2024-12-11 14:09:29.297954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:36.458 [2024-12-11 14:09:29.297964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.458 [2024-12-11 14:09:29.297974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.458 [2024-12-11 14:09:29.298108] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 552.493 ms, result 0 00:28:37.395 00:28:37.395 00:28:37.395 14:09:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:39.300 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:39.300 14:09:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:39.300 [2024-12-11 14:09:32.181296] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:28:39.300 [2024-12-11 14:09:32.181424] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83995 ] 00:28:39.560 [2024-12-11 14:09:32.364799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.560 [2024-12-11 14:09:32.480183] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.819 [2024-12-11 14:09:32.843306] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:39.819 [2024-12-11 14:09:32.843381] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:40.079 [2024-12-11 14:09:33.005360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.079 [2024-12-11 14:09:33.005646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:40.079 [2024-12-11 14:09:33.005673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:40.079 [2024-12-11 14:09:33.005685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.079 [2024-12-11 14:09:33.005756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.079 [2024-12-11 14:09:33.005773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:40.079 [2024-12-11 14:09:33.005784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:28:40.079 [2024-12-11 14:09:33.005795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.079 [2024-12-11 14:09:33.005819] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:40.079 [2024-12-11 14:09:33.006802] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:40.079 [2024-12-11 14:09:33.006834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.079 [2024-12-11 14:09:33.006847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:40.079 [2024-12-11 14:09:33.006859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:28:40.079 [2024-12-11 14:09:33.006869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.079 [2024-12-11 14:09:33.008325] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:40.079 [2024-12-11 14:09:33.028369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.079 [2024-12-11 14:09:33.028550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:40.079 [2024-12-11 14:09:33.028573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.075 ms 00:28:40.079 [2024-12-11 14:09:33.028584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.079 [2024-12-11 14:09:33.028666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.079 [2024-12-11 14:09:33.028679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:40.079 [2024-12-11 14:09:33.028689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:40.079 [2024-12-11 14:09:33.028700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.079 [2024-12-11 14:09:33.035876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:40.079 [2024-12-11 14:09:33.035915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:40.079 [2024-12-11 14:09:33.035928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.101 ms 00:28:40.079 [2024-12-11 14:09:33.035944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.079 [2024-12-11 14:09:33.036029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.079 [2024-12-11 14:09:33.036043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:40.079 [2024-12-11 14:09:33.036054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:28:40.079 [2024-12-11 14:09:33.036065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.079 [2024-12-11 14:09:33.036115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.079 [2024-12-11 14:09:33.036126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:40.079 [2024-12-11 14:09:33.036137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:40.079 [2024-12-11 14:09:33.036147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.079 [2024-12-11 14:09:33.036178] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:40.079 [2024-12-11 14:09:33.041146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.079 [2024-12-11 14:09:33.041179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:40.079 [2024-12-11 14:09:33.041194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.982 ms 00:28:40.079 [2024-12-11 14:09:33.041204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.079 [2024-12-11 14:09:33.041241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.079 [2024-12-11 14:09:33.041253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:40.079 [2024-12-11 14:09:33.041264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:40.079 [2024-12-11 14:09:33.041273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.079 [2024-12-11 14:09:33.041334] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:40.079 [2024-12-11 14:09:33.041359] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:40.079 [2024-12-11 14:09:33.041396] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:40.079 [2024-12-11 14:09:33.041417] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:40.079 [2024-12-11 14:09:33.041506] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:40.079 [2024-12-11 14:09:33.041519] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:40.079 [2024-12-11 14:09:33.041532] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:40.079 [2024-12-11 14:09:33.041545] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:40.079 [2024-12-11 14:09:33.041557] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:40.079 [2024-12-11 14:09:33.041569] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:40.079 [2024-12-11 14:09:33.041579] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:40.079 [2024-12-11 14:09:33.041589] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:40.079 [2024-12-11 14:09:33.041602] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:40.079 [2024-12-11 14:09:33.041612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.079 [2024-12-11 14:09:33.041622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:40.079 [2024-12-11 14:09:33.041632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:28:40.079 [2024-12-11 14:09:33.041642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.079 [2024-12-11 14:09:33.041717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.079 [2024-12-11 14:09:33.041728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:40.079 [2024-12-11 14:09:33.041738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:40.079 [2024-12-11 14:09:33.041748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.079 [2024-12-11 14:09:33.041854] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:40.079 [2024-12-11 14:09:33.041868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:40.079 [2024-12-11 14:09:33.041879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:40.079 [2024-12-11 14:09:33.041890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.079 [2024-12-11 14:09:33.041900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:40.079 [2024-12-11 14:09:33.041910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:40.079 [2024-12-11 14:09:33.041919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:40.079 [2024-12-11 14:09:33.041930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:40.079 [2024-12-11 14:09:33.041940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:40.079 [2024-12-11 14:09:33.041950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:40.079 [2024-12-11 14:09:33.041960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:40.079 [2024-12-11 14:09:33.041969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:40.079 [2024-12-11 14:09:33.041978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:40.079 [2024-12-11 14:09:33.041999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:40.079 [2024-12-11 14:09:33.042009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:40.079 [2024-12-11 14:09:33.042018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.079 [2024-12-11 14:09:33.042027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:40.079 [2024-12-11 14:09:33.042045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:40.079 [2024-12-11 14:09:33.042055] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.079 [2024-12-11 14:09:33.042064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:40.079 [2024-12-11 14:09:33.042073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:40.079 [2024-12-11 14:09:33.042083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:40.079 [2024-12-11 14:09:33.042091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:40.079 [2024-12-11 14:09:33.042101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:40.079 [2024-12-11 14:09:33.042110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:40.079 [2024-12-11 14:09:33.042127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:40.079 [2024-12-11 14:09:33.042137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:40.079 [2024-12-11 14:09:33.042146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:40.079 [2024-12-11 14:09:33.042155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:40.079 [2024-12-11 14:09:33.042165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:40.079 [2024-12-11 14:09:33.042174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:40.079 [2024-12-11 14:09:33.042183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:40.079 [2024-12-11 14:09:33.042192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:40.080 [2024-12-11 14:09:33.042201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:40.080 [2024-12-11 14:09:33.042211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:40.080 [2024-12-11 14:09:33.042220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:40.080 [2024-12-11 14:09:33.042229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:40.080 [2024-12-11 14:09:33.042238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:40.080 [2024-12-11 14:09:33.042247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:40.080 [2024-12-11 14:09:33.042258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.080 [2024-12-11 14:09:33.042267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:40.080 [2024-12-11 14:09:33.042276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:40.080 [2024-12-11 14:09:33.042285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.080 [2024-12-11 14:09:33.042294] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:40.080 [2024-12-11 14:09:33.042303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:40.080 [2024-12-11 14:09:33.042313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:40.080 [2024-12-11 14:09:33.042323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.080 [2024-12-11 14:09:33.042332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:40.080 [2024-12-11 14:09:33.042341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:40.080 [2024-12-11 14:09:33.042351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:40.080 
[2024-12-11 14:09:33.042360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:40.080 [2024-12-11 14:09:33.042369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:40.080 [2024-12-11 14:09:33.042378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:40.080 [2024-12-11 14:09:33.042389] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:40.080 [2024-12-11 14:09:33.042401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:40.080 [2024-12-11 14:09:33.042416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:40.080 [2024-12-11 14:09:33.042427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:40.080 [2024-12-11 14:09:33.042438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:40.080 [2024-12-11 14:09:33.042448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:40.080 [2024-12-11 14:09:33.042458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:40.080 [2024-12-11 14:09:33.042469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:40.080 [2024-12-11 14:09:33.042479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:40.080 [2024-12-11 14:09:33.042489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:40.080 [2024-12-11 14:09:33.042499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:40.080 [2024-12-11 14:09:33.042509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:40.080 [2024-12-11 14:09:33.042519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:40.080 [2024-12-11 14:09:33.042530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:40.080 [2024-12-11 14:09:33.042540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:40.080 [2024-12-11 14:09:33.042550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:40.080 [2024-12-11 14:09:33.042560] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:40.080 [2024-12-11 14:09:33.042571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:40.080 [2024-12-11 14:09:33.042584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:40.080 [2024-12-11 14:09:33.042594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:40.080 [2024-12-11 14:09:33.042604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:40.080 [2024-12-11 14:09:33.042614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:40.080 [2024-12-11 14:09:33.042625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.080 [2024-12-11 14:09:33.042636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:40.080 [2024-12-11 14:09:33.042646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:28:40.080 [2024-12-11 14:09:33.042656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.080 [2024-12-11 14:09:33.079680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.080 [2024-12-11 14:09:33.079892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:40.080 [2024-12-11 14:09:33.079918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.032 ms 00:28:40.080 [2024-12-11 14:09:33.079936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.080 [2024-12-11 14:09:33.080038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.080 [2024-12-11 14:09:33.080050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:40.080 [2024-12-11 14:09:33.080061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:40.080 [2024-12-11 14:09:33.080071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.340 [2024-12-11 14:09:33.142793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.340 [2024-12-11 14:09:33.142861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:40.340 [2024-12-11 14:09:33.142877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.741 ms 00:28:40.340 [2024-12-11 14:09:33.142887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.340 [2024-12-11 14:09:33.142947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.340 [2024-12-11 14:09:33.142959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:40.340 [2024-12-11 14:09:33.142975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:40.340 [2024-12-11 14:09:33.142985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.340 [2024-12-11 14:09:33.143499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.340 [2024-12-11 14:09:33.143520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:40.340 [2024-12-11 14:09:33.143531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:28:40.340 [2024-12-11 14:09:33.143540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.340 [2024-12-11 14:09:33.143671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.340 [2024-12-11 14:09:33.143685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:40.340 [2024-12-11 14:09:33.143699] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:28:40.340 [2024-12-11 14:09:33.143709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.340 [2024-12-11 14:09:33.164705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.340 [2024-12-11 14:09:33.164907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:40.340 [2024-12-11 14:09:33.165025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.006 ms 00:28:40.340 [2024-12-11 14:09:33.165065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.340 [2024-12-11 14:09:33.185375] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:40.340 [2024-12-11 14:09:33.185566] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:40.340 [2024-12-11 14:09:33.185658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.340 [2024-12-11 14:09:33.185691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:40.340 [2024-12-11 14:09:33.185724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.463 ms 00:28:40.340 [2024-12-11 14:09:33.185796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.340 [2024-12-11 14:09:33.216358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.340 [2024-12-11 14:09:33.216567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:40.340 [2024-12-11 14:09:33.216695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.521 ms 00:28:40.340 [2024-12-11 14:09:33.216733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.340 [2024-12-11 14:09:33.235761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.340 [2024-12-11 14:09:33.235933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:40.340 [2024-12-11 14:09:33.236020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.931 ms 00:28:40.340 [2024-12-11 14:09:33.236055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.340 [2024-12-11 14:09:33.254425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.340 [2024-12-11 14:09:33.254607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:40.340 [2024-12-11 14:09:33.254689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.282 ms 00:28:40.340 [2024-12-11 14:09:33.254704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.340 [2024-12-11 14:09:33.255484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.340 [2024-12-11 14:09:33.255609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:40.340 [2024-12-11 14:09:33.255634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:28:40.340 [2024-12-11 14:09:33.255645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.340 [2024-12-11 14:09:33.342080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.340 [2024-12-11 14:09:33.342385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:40.340 [2024-12-11 14:09:33.342418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.541 ms 00:28:40.340 [2024-12-11 14:09:33.342429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.340 [2024-12-11 14:09:33.354469] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:40.340 [2024-12-11 14:09:33.357669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.340 [2024-12-11 14:09:33.357832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:40.340 [2024-12-11 14:09:33.357856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.151 ms 00:28:40.340 [2024-12-11 14:09:33.357868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.340 [2024-12-11 14:09:33.357980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.340 [2024-12-11 14:09:33.357994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:40.340 [2024-12-11 14:09:33.358005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:40.340 [2024-12-11 14:09:33.358019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.340 [2024-12-11 14:09:33.358922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.340 [2024-12-11 14:09:33.358938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:40.340 [2024-12-11 14:09:33.358949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:28:40.340 [2024-12-11 14:09:33.358960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.340 [2024-12-11 14:09:33.358990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.340 [2024-12-11 14:09:33.359001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:40.340 [2024-12-11 14:09:33.359012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:40.340 [2024-12-11 14:09:33.359022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.340 [2024-12-11 14:09:33.359060] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:40.340 [2024-12-11 14:09:33.359072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.340 [2024-12-11 14:09:33.359082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:40.340 [2024-12-11 14:09:33.359092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:40.340 [2024-12-11 14:09:33.359102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.600 [2024-12-11 14:09:33.395806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.600 [2024-12-11 14:09:33.396035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:40.600 [2024-12-11 14:09:33.396067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.742 ms 00:28:40.600 [2024-12-11 14:09:33.396078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.600 [2024-12-11 14:09:33.396161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.600 [2024-12-11 14:09:33.396174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:40.600 [2024-12-11 14:09:33.396184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:40.600 [2024-12-11 14:09:33.396195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
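Each FTL management step in the trace above is logged by mngt/ftl_mngt.c as a four-record group: an "Action" marker, the step "name", its "duration", and a "status" code, where status 0 means the step succeeded. A rough way to condense such a console log into a per-step timing table is an awk sketch along these lines; it assumes one trace record per line (as the console originally emitted them), and the script and the "console.log" filename are illustrative, not part of the test harness:

    awk '
      /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name:/ {
        sub(/.*name: /, ""); step = $0            # remember the step name
      }
      /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration:/ {
        sub(/.*duration: /, ""); sub(/ ms.*/, "")
        printf "%-35s %10.3f ms\n", step, $0      # pair it with its duration
      }
    ' console.log

Run against this section it would show Initialize metadata (37.032 ms), Initialize NV cache (62.741 ms) and Restore P2L checkpoints (86.541 ms) dominating the ~392 ms startup total reported just below.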
00:28:40.600 [2024-12-11 14:09:33.397288] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 392.117 ms, result 0 00:28:41.978
[2024-12-11T14:09:35.962Z] Copying: 27/1024 [MB] (27 MBps)
[2024-12-11T14:09:36.898Z] Copying: 55/1024 [MB] (27 MBps)
[2024-12-11T14:09:37.834Z] Copying: 82/1024 [MB] (27 MBps)
[2024-12-11T14:09:38.771Z] Copying: 108/1024 [MB] (25 MBps)
[2024-12-11T14:09:39.750Z] Copying: 134/1024 [MB] (25 MBps)
[2024-12-11T14:09:40.687Z] Copying: 159/1024 [MB] (25 MBps)
[2024-12-11T14:09:41.625Z] Copying: 184/1024 [MB] (25 MBps)
[2024-12-11T14:09:43.004Z] Copying: 210/1024 [MB] (25 MBps)
[2024-12-11T14:09:43.941Z] Copying: 235/1024 [MB] (25 MBps)
[2024-12-11T14:09:44.878Z] Copying: 260/1024 [MB] (24 MBps)
[2024-12-11T14:09:45.814Z] Copying: 285/1024 [MB] (25 MBps)
[2024-12-11T14:09:46.750Z] Copying: 311/1024 [MB] (25 MBps)
[2024-12-11T14:09:47.685Z] Copying: 337/1024 [MB] (26 MBps)
[2024-12-11T14:09:48.622Z] Copying: 363/1024 [MB] (26 MBps)
[2024-12-11T14:09:50.012Z] Copying: 390/1024 [MB] (26 MBps)
[2024-12-11T14:09:50.949Z] Copying: 417/1024 [MB] (27 MBps)
[2024-12-11T14:09:51.884Z] Copying: 444/1024 [MB] (27 MBps)
[2024-12-11T14:09:52.818Z] Copying: 470/1024 [MB] (25 MBps)
[2024-12-11T14:09:53.753Z] Copying: 498/1024 [MB] (27 MBps)
[2024-12-11T14:09:54.686Z] Copying: 524/1024 [MB] (26 MBps)
[2024-12-11T14:09:55.617Z] Copying: 550/1024 [MB] (25 MBps)
[2024-12-11T14:09:56.992Z] Copying: 576/1024 [MB] (25 MBps)
[2024-12-11T14:09:57.579Z] Copying: 601/1024 [MB] (25 MBps)
[2024-12-11T14:09:58.952Z] Copying: 626/1024 [MB] (25 MBps)
[2024-12-11T14:09:59.886Z] Copying: 652/1024 [MB] (25 MBps)
[2024-12-11T14:10:00.819Z] Copying: 678/1024 [MB] (25 MBps)
[2024-12-11T14:10:01.752Z] Copying: 704/1024 [MB] (26 MBps)
[2024-12-11T14:10:02.686Z] Copying: 730/1024 [MB] (26 MBps)
[2024-12-11T14:10:03.620Z] Copying: 756/1024 [MB] (26 MBps)
[2024-12-11T14:10:05.012Z] Copying: 784/1024 [MB] (27 MBps)
[2024-12-11T14:10:05.583Z] Copying: 810/1024 [MB] (26 MBps)
[2024-12-11T14:10:06.961Z] Copying: 836/1024 [MB] (25 MBps)
[2024-12-11T14:10:07.897Z] Copying: 863/1024 [MB] (26 MBps)
[2024-12-11T14:10:08.835Z] Copying: 889/1024 [MB] (26 MBps)
[2024-12-11T14:10:09.772Z] Copying: 915/1024 [MB] (25 MBps)
[2024-12-11T14:10:10.709Z] Copying: 941/1024 [MB] (26 MBps)
[2024-12-11T14:10:11.643Z] Copying: 966/1024 [MB] (25 MBps)
[2024-12-11T14:10:12.578Z] Copying: 992/1024 [MB] (25 MBps)
[2024-12-11T14:10:12.859Z] Copying: 1018/1024 [MB] (25 MBps)
[2024-12-11T14:10:12.859Z] Copying: 1024/1024 [MB] (average 26 MBps)
[2024-12-11 14:10:12.797155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.812 [2024-12-11 14:10:12.797433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:19.812 [2024-12-11 14:10:12.797582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:19.812 [2024-12-11 14:10:12.797650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.812 [2024-12-11 14:10:12.797877] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:19.812 [2024-12-11 14:10:12.806745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.812 [2024-12-11 14:10:12.807006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:19.812 [2024-12-11 14:10:12.807160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.769 ms 00:29:19.812
[2024-12-11 14:10:12.807233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.812 [2024-12-11 14:10:12.807910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.812 [2024-12-11 14:10:12.808085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:19.812 [2024-12-11 14:10:12.808250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:29:19.812 [2024-12-11 14:10:12.808281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.812 [2024-12-11 14:10:12.812994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.812 [2024-12-11 14:10:12.813028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:19.812 [2024-12-11 14:10:12.813044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.684 ms 00:29:19.812 [2024-12-11 14:10:12.813064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.812 [2024-12-11 14:10:12.819791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.812 [2024-12-11 14:10:12.819970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:19.812 [2024-12-11 14:10:12.819997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.714 ms 00:29:19.812 [2024-12-11 14:10:12.820011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.077 [2024-12-11 14:10:12.857152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.077 [2024-12-11 14:10:12.857200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:20.077 [2024-12-11 14:10:12.857215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.131 ms 00:29:20.077 [2024-12-11 14:10:12.857225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.077 [2024-12-11 14:10:12.878578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.077 [2024-12-11 14:10:12.878626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:20.077 [2024-12-11 14:10:12.878642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.340 ms 00:29:20.077 [2024-12-11 14:10:12.878653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.077 [2024-12-11 14:10:12.880910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.077 [2024-12-11 14:10:12.880948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:20.077 [2024-12-11 14:10:12.880961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.207 ms 00:29:20.077 [2024-12-11 14:10:12.880971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.077 [2024-12-11 14:10:12.918203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.077 [2024-12-11 14:10:12.918250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:20.077 [2024-12-11 14:10:12.918265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.272 ms 00:29:20.077 [2024-12-11 14:10:12.918276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.077 [2024-12-11 14:10:12.955099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.077 [2024-12-11 14:10:12.955153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:20.077 [2024-12-11 14:10:12.955169] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.833 ms 00:29:20.077 [2024-12-11 14:10:12.955195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.077 [2024-12-11 14:10:12.991375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.077 [2024-12-11 14:10:12.991427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:20.077 [2024-12-11 14:10:12.991442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.172 ms 00:29:20.077 [2024-12-11 14:10:12.991468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.077 [2024-12-11 14:10:13.027038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.077 [2024-12-11 14:10:13.027084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:20.077 [2024-12-11 14:10:13.027098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.534 ms 00:29:20.077 [2024-12-11 14:10:13.027124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.077 [2024-12-11 14:10:13.027166] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:20.077 [2024-12-11 14:10:13.027190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:20.077 [2024-12-11 14:10:13.027207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:20.077 [2024-12-11 14:10:13.027219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027378] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:20.077 [2024-12-11 14:10:13.027502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027640] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 
14:10:13.027932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.027996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:29:20.078 [2024-12-11 14:10:13.028202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:20.078 [2024-12-11 14:10:13.028306] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:20.078 [2024-12-11 14:10:13.028316] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d06a086c-22c8-4d8c-a657-f1bbdab1dbcb 00:29:20.078 [2024-12-11 14:10:13.028327] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:20.078 [2024-12-11 14:10:13.028337] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:20.078 [2024-12-11 14:10:13.028346] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:20.078 [2024-12-11 14:10:13.028357] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:20.078 [2024-12-11 14:10:13.028377] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:20.078 [2024-12-11 14:10:13.028387] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:20.078 [2024-12-11 14:10:13.028397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:20.078 [2024-12-11 14:10:13.028407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:20.078 [2024-12-11 14:10:13.028416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:20.078 [2024-12-11 14:10:13.028426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.078 [2024-12-11 14:10:13.028435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:20.078 [2024-12-11 14:10:13.028446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms 00:29:20.078 [2024-12-11 14:10:13.028459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.078 [2024-12-11 14:10:13.048309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.078 [2024-12-11 14:10:13.048484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:20.078 [2024-12-11 14:10:13.048505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.846 ms 00:29:20.078 [2024-12-11 14:10:13.048515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.078 [2024-12-11 14:10:13.049028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:20.078 [2024-12-11 14:10:13.049048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:20.078 [2024-12-11 14:10:13.049065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:29:20.078 [2024-12-11 14:10:13.049074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.078 [2024-12-11 14:10:13.099749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.078 [2024-12-11 14:10:13.099797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:20.078 [2024-12-11 14:10:13.099811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.078 [2024-12-11 14:10:13.099838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.079 [2024-12-11 14:10:13.099915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.079 [2024-12-11 14:10:13.099931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:20.079 [2024-12-11 14:10:13.099942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.079 [2024-12-11 14:10:13.099952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.079 [2024-12-11 14:10:13.100044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.079 [2024-12-11 14:10:13.100057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:20.079 [2024-12-11 14:10:13.100068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.079 [2024-12-11 14:10:13.100079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.079 [2024-12-11 14:10:13.100096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.079 [2024-12-11 14:10:13.100106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:20.079 [2024-12-11 14:10:13.100122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.079 [2024-12-11 14:10:13.100131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.337 [2024-12-11 14:10:13.221576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.337 [2024-12-11 14:10:13.221888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:20.337 [2024-12-11 14:10:13.221912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.337 [2024-12-11 14:10:13.221923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.337 [2024-12-11 14:10:13.321009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.338 [2024-12-11 14:10:13.321290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:20.338 [2024-12-11 14:10:13.321313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.338 [2024-12-11 14:10:13.321324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.338 [2024-12-11 14:10:13.321443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.338 [2024-12-11 14:10:13.321456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:20.338 [2024-12-11 14:10:13.321467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.338 [2024-12-11 14:10:13.321477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.338 
[2024-12-11 14:10:13.321515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.338 [2024-12-11 14:10:13.321526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:20.338 [2024-12-11 14:10:13.321537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.338 [2024-12-11 14:10:13.321550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.338 [2024-12-11 14:10:13.321670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.338 [2024-12-11 14:10:13.321683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:20.338 [2024-12-11 14:10:13.321694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.338 [2024-12-11 14:10:13.321704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.338 [2024-12-11 14:10:13.321738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.338 [2024-12-11 14:10:13.321751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:20.338 [2024-12-11 14:10:13.321761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.338 [2024-12-11 14:10:13.321771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.338 [2024-12-11 14:10:13.321811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.338 [2024-12-11 14:10:13.321822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:20.338 [2024-12-11 14:10:13.321859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.338 [2024-12-11 14:10:13.321869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.338 [2024-12-11 14:10:13.321910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.338 [2024-12-11 14:10:13.321922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:20.338 [2024-12-11 14:10:13.321933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.338 [2024-12-11 14:10:13.321946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.338 [2024-12-11 14:10:13.322084] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 525.766 ms, result 0 00:29:21.711 00:29:21.711 00:29:21.711 14:10:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:23.084 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:23.084 14:10:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:23.084 14:10:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:23.084 14:10:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:23.084 14:10:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:23.350 14:10:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:23.350 14:10:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:23.350 14:10:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:23.350 Process with pid 82209 
is not found 00:29:23.350 14:10:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82209 00:29:23.350 14:10:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82209 ']' 00:29:23.350 14:10:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 82209 00:29:23.350 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82209) - No such process 00:29:23.350 14:10:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 82209 is not found' 00:29:23.350 14:10:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:23.918 14:10:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:23.918 14:10:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:23.918 Remove shared memory files 00:29:23.918 14:10:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:23.918 14:10:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:23.918 14:10:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:23.918 14:10:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:23.918 14:10:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:23.918 ************************************ 00:29:23.918 END TEST ftl_dirty_shutdown 00:29:23.918 ************************************ 00:29:23.918 00:29:23.918 real 3m36.141s 00:29:23.918 user 4m4.734s 00:29:23.918 sys 0m38.443s 00:29:23.918 14:10:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:23.918 14:10:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:23.918 14:10:16 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:23.918 14:10:16 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:23.918 14:10:16 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.918 14:10:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:23.918 ************************************ 00:29:23.918 START TEST ftl_upgrade_shutdown 00:29:23.918 ************************************ 00:29:23.918 14:10:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:23.918 * Looking for test storage... 
00:29:23.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:23.918 14:10:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:23.918 14:10:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:23.918 14:10:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:24.177 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:24.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.178 --rc genhtml_branch_coverage=1 00:29:24.178 --rc genhtml_function_coverage=1 00:29:24.178 --rc genhtml_legend=1 00:29:24.178 --rc geninfo_all_blocks=1 00:29:24.178 --rc geninfo_unexecuted_blocks=1 00:29:24.178 00:29:24.178 ' 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:24.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.178 --rc genhtml_branch_coverage=1 00:29:24.178 --rc genhtml_function_coverage=1 00:29:24.178 --rc genhtml_legend=1 00:29:24.178 --rc geninfo_all_blocks=1 00:29:24.178 --rc geninfo_unexecuted_blocks=1 00:29:24.178 00:29:24.178 ' 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:24.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.178 --rc genhtml_branch_coverage=1 00:29:24.178 --rc genhtml_function_coverage=1 00:29:24.178 --rc genhtml_legend=1 00:29:24.178 --rc geninfo_all_blocks=1 00:29:24.178 --rc geninfo_unexecuted_blocks=1 00:29:24.178 00:29:24.178 ' 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:24.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.178 --rc genhtml_branch_coverage=1 00:29:24.178 --rc genhtml_function_coverage=1 00:29:24.178 --rc genhtml_legend=1 00:29:24.178 --rc geninfo_all_blocks=1 00:29:24.178 --rc geninfo_unexecuted_blocks=1 00:29:24.178 00:29:24.178 ' 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:24.178 14:10:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:24.178 14:10:17 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84507 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84507 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84507 ']' 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.178 14:10:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:24.178 [2024-12-11 14:10:17.142715] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
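Up to this point the trace is tcp_target_setup from test/ftl/common.sh: it exports the test geometry (a 20480 MiB base device on 0000:00:11.0, a 5120 MiB NV cache on 0000:00:10.0, an L2P DRAM limit of 2) and launches spdk_tgt pinned to core 0, which comes up as pid 84507. A minimal sketch of that launch pattern, with paths taken from the log; the polling loop is a hypothetical stand-in for the waitforlisten() helper the harness actually calls:

    # launch the SPDK target on core 0 and wait for its default RPC socket
    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt_bin" --cpumask='[0]' &
    spdk_tgt_pid=$!
    # assumed stand-in for waitforlisten(): poll until the socket exists
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

The EAL parameter dump that follows is the DPDK command line spdk_tgt was started with: a single core (-l 0), PA IOVA mode, and a hugepage file prefix derived from the pid.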
00:29:24.178 [2024-12-11 14:10:17.143100] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84507 ] 00:29:24.437 [2024-12-11 14:10:17.321882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.437 [2024-12-11 14:10:17.425183] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.374 14:10:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.374 14:10:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:25.374 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:25.374 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:25.374 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:25.374 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:25.374 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:25.374 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:25.374 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:25.374 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:25.374 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:25.375 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:25.375 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:25.375 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:25.375 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:25.375 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:25.375 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:25.375 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:25.375 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:25.375 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:25.375 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:25.375 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:25.375 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:25.634 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:25.634 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:25.634 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:25.634 14:10:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:25.634 14:10:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:25.634 14:10:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:25.634 14:10:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:25.634 14:10:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:25.892 14:10:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:25.892 { 00:29:25.892 "name": "basen1", 00:29:25.892 "aliases": [ 00:29:25.892 "93501628-1f97-4ca1-8ac0-0bde0488ab10" 00:29:25.892 ], 00:29:25.892 "product_name": "NVMe disk", 00:29:25.892 "block_size": 4096, 00:29:25.892 "num_blocks": 1310720, 00:29:25.892 "uuid": "93501628-1f97-4ca1-8ac0-0bde0488ab10", 00:29:25.892 "numa_id": -1, 00:29:25.892 "assigned_rate_limits": { 00:29:25.892 "rw_ios_per_sec": 0, 00:29:25.892 "rw_mbytes_per_sec": 0, 00:29:25.892 "r_mbytes_per_sec": 0, 00:29:25.892 "w_mbytes_per_sec": 0 00:29:25.892 }, 00:29:25.892 "claimed": true, 00:29:25.892 "claim_type": "read_many_write_one", 00:29:25.892 "zoned": false, 00:29:25.892 "supported_io_types": { 00:29:25.892 "read": true, 00:29:25.892 "write": true, 00:29:25.892 "unmap": true, 00:29:25.892 "flush": true, 00:29:25.892 "reset": true, 00:29:25.892 "nvme_admin": true, 00:29:25.892 "nvme_io": true, 00:29:25.892 "nvme_io_md": false, 00:29:25.892 "write_zeroes": true, 00:29:25.892 "zcopy": false, 00:29:25.892 "get_zone_info": false, 00:29:25.892 "zone_management": false, 00:29:25.892 "zone_append": false, 00:29:25.892 "compare": true, 00:29:25.892 "compare_and_write": false, 00:29:25.892 "abort": true, 00:29:25.892 "seek_hole": false, 00:29:25.892 "seek_data": false, 00:29:25.892 "copy": true, 00:29:25.892 "nvme_iov_md": false 00:29:25.892 }, 00:29:25.892 "driver_specific": { 00:29:25.892 "nvme": [ 00:29:25.892 { 00:29:25.892 "pci_address": "0000:00:11.0", 00:29:25.892 "trid": { 00:29:25.892 "trtype": "PCIe", 00:29:25.892 "traddr": "0000:00:11.0" 00:29:25.892 }, 00:29:25.892 "ctrlr_data": { 00:29:25.892 "cntlid": 0, 00:29:25.892 "vendor_id": "0x1b36", 00:29:25.892 "model_number": "QEMU NVMe Ctrl", 00:29:25.892 "serial_number": "12341", 00:29:25.892 "firmware_revision": "8.0.0", 00:29:25.892 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:25.892 "oacs": { 00:29:25.892 "security": 0, 00:29:25.892 "format": 1, 00:29:25.892 "firmware": 0, 00:29:25.892 "ns_manage": 1 00:29:25.892 }, 00:29:25.892 "multi_ctrlr": false, 00:29:25.892 "ana_reporting": false 00:29:25.892 }, 00:29:25.892 "vs": { 00:29:25.892 "nvme_version": "1.4" 00:29:25.892 }, 00:29:25.892 "ns_data": { 00:29:25.892 "id": 1, 00:29:25.892 "can_share": false 00:29:25.892 } 00:29:25.892 } 00:29:25.892 ], 00:29:25.892 "mp_policy": "active_passive" 00:29:25.892 } 00:29:25.892 } 00:29:25.892 ]' 00:29:25.892 14:10:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:25.892 14:10:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:25.892 14:10:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:25.892 14:10:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:25.892 14:10:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:25.892 14:10:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:25.892 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:25.892 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:25.892 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:25.892 14:10:18 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:25.892 14:10:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:26.150 14:10:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=a0ee91dc-b81e-4958-b749-578aa0ce787d 00:29:26.150 14:10:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:26.150 14:10:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a0ee91dc-b81e-4958-b749-578aa0ce787d 00:29:26.409 14:10:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:26.667 14:10:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=d0b17b41-0ba7-4dd6-8a79-69dcc24e10d6 00:29:26.667 14:10:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u d0b17b41-0ba7-4dd6-8a79-69dcc24e10d6 00:29:26.667 14:10:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=ed5c919f-aa90-4d35-b99f-18679a710cdb 00:29:26.667 14:10:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z ed5c919f-aa90-4d35-b99f-18679a710cdb ]] 00:29:26.667 14:10:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 ed5c919f-aa90-4d35-b99f-18679a710cdb 5120 00:29:26.667 14:10:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:26.667 14:10:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:26.667 14:10:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=ed5c919f-aa90-4d35-b99f-18679a710cdb 00:29:26.667 14:10:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:26.667 14:10:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size ed5c919f-aa90-4d35-b99f-18679a710cdb 00:29:26.667 14:10:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ed5c919f-aa90-4d35-b99f-18679a710cdb 00:29:26.667 14:10:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:26.667 14:10:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:26.667 14:10:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:26.667 14:10:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ed5c919f-aa90-4d35-b99f-18679a710cdb 00:29:26.925 14:10:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:26.925 { 00:29:26.925 "name": "ed5c919f-aa90-4d35-b99f-18679a710cdb", 00:29:26.925 "aliases": [ 00:29:26.925 "lvs/basen1p0" 00:29:26.925 ], 00:29:26.925 "product_name": "Logical Volume", 00:29:26.925 "block_size": 4096, 00:29:26.925 "num_blocks": 5242880, 00:29:26.925 "uuid": "ed5c919f-aa90-4d35-b99f-18679a710cdb", 00:29:26.925 "assigned_rate_limits": { 00:29:26.925 "rw_ios_per_sec": 0, 00:29:26.925 "rw_mbytes_per_sec": 0, 00:29:26.925 "r_mbytes_per_sec": 0, 00:29:26.925 "w_mbytes_per_sec": 0 00:29:26.925 }, 00:29:26.925 "claimed": false, 00:29:26.925 "zoned": false, 00:29:26.925 "supported_io_types": { 00:29:26.925 "read": true, 00:29:26.925 "write": true, 00:29:26.925 "unmap": true, 00:29:26.925 "flush": false, 00:29:26.925 "reset": true, 00:29:26.925 "nvme_admin": false, 00:29:26.925 "nvme_io": false, 00:29:26.925 "nvme_io_md": false, 00:29:26.925 "write_zeroes": 
true, 00:29:26.925 "zcopy": false, 00:29:26.925 "get_zone_info": false, 00:29:26.925 "zone_management": false, 00:29:26.925 "zone_append": false, 00:29:26.925 "compare": false, 00:29:26.925 "compare_and_write": false, 00:29:26.925 "abort": false, 00:29:26.925 "seek_hole": true, 00:29:26.925 "seek_data": true, 00:29:26.925 "copy": false, 00:29:26.925 "nvme_iov_md": false 00:29:26.925 }, 00:29:26.925 "driver_specific": { 00:29:26.925 "lvol": { 00:29:26.925 "lvol_store_uuid": "d0b17b41-0ba7-4dd6-8a79-69dcc24e10d6", 00:29:26.925 "base_bdev": "basen1", 00:29:26.925 "thin_provision": true, 00:29:26.925 "num_allocated_clusters": 0, 00:29:26.925 "snapshot": false, 00:29:26.925 "clone": false, 00:29:26.925 "esnap_clone": false 00:29:26.925 } 00:29:26.925 } 00:29:26.925 } 00:29:26.925 ]' 00:29:26.925 14:10:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:26.925 14:10:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:26.925 14:10:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:27.186 14:10:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:27.186 14:10:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:27.186 14:10:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:27.186 14:10:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:27.186 14:10:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:27.186 14:10:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:27.445 14:10:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:27.445 14:10:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:27.445 14:10:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:27.445 14:10:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:27.445 14:10:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:27.445 14:10:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d ed5c919f-aa90-4d35-b99f-18679a710cdb -c cachen1p0 --l2p_dram_limit 2 00:29:27.704 [2024-12-11 14:10:20.633519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.704 [2024-12-11 14:10:20.633820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:27.704 [2024-12-11 14:10:20.633861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:27.704 [2024-12-11 14:10:20.633872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.704 [2024-12-11 14:10:20.633966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.704 [2024-12-11 14:10:20.633980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:27.704 [2024-12-11 14:10:20.633993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:27.704 [2024-12-11 14:10:20.634004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.704 [2024-12-11 14:10:20.634028] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:27.704 [2024-12-11 
14:10:20.635120] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:27.704 [2024-12-11 14:10:20.635155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.704 [2024-12-11 14:10:20.635167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:27.704 [2024-12-11 14:10:20.635182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.131 ms 00:29:27.704 [2024-12-11 14:10:20.635193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.704 [2024-12-11 14:10:20.635428] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID dbe95b4b-8755-43da-acf2-d6b0cf13ae48 00:29:27.704 [2024-12-11 14:10:20.636888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.704 [2024-12-11 14:10:20.636924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:27.704 [2024-12-11 14:10:20.636937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:27.704 [2024-12-11 14:10:20.636950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.704 [2024-12-11 14:10:20.644526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.704 [2024-12-11 14:10:20.644561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:27.704 [2024-12-11 14:10:20.644573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.548 ms 00:29:27.704 [2024-12-11 14:10:20.644602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.704 [2024-12-11 14:10:20.644648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.704 [2024-12-11 14:10:20.644665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:27.704 [2024-12-11 14:10:20.644676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:29:27.704 [2024-12-11 14:10:20.644691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.704 [2024-12-11 14:10:20.644767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.704 [2024-12-11 14:10:20.644784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:27.704 [2024-12-11 14:10:20.644795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:27.704 [2024-12-11 14:10:20.644821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.704 [2024-12-11 14:10:20.644862] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:27.704 [2024-12-11 14:10:20.649462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.704 [2024-12-11 14:10:20.649494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:27.704 [2024-12-11 14:10:20.649511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.611 ms 00:29:27.704 [2024-12-11 14:10:20.649521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.704 [2024-12-11 14:10:20.649554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.704 [2024-12-11 14:10:20.649564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:27.704 [2024-12-11 14:10:20.649577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:27.704 [2024-12-11 14:10:20.649587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:27.704 [2024-12-11 14:10:20.649624] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:27.704 [2024-12-11 14:10:20.649753] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:27.704 [2024-12-11 14:10:20.649772] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:27.704 [2024-12-11 14:10:20.649786] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:27.704 [2024-12-11 14:10:20.649802] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:27.704 [2024-12-11 14:10:20.649813] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:27.704 [2024-12-11 14:10:20.649842] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:27.704 [2024-12-11 14:10:20.649852] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:27.704 [2024-12-11 14:10:20.649886] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:27.704 [2024-12-11 14:10:20.649896] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:27.704 [2024-12-11 14:10:20.649909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.704 [2024-12-11 14:10:20.649920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:27.704 [2024-12-11 14:10:20.649932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.287 ms 00:29:27.704 [2024-12-11 14:10:20.649943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.704 [2024-12-11 14:10:20.650020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.704 [2024-12-11 14:10:20.650041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:27.704 [2024-12-11 14:10:20.650055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:29:27.704 [2024-12-11 14:10:20.650065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.704 [2024-12-11 14:10:20.650175] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:27.704 [2024-12-11 14:10:20.650189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:27.704 [2024-12-11 14:10:20.650202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:27.704 [2024-12-11 14:10:20.650212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.705 [2024-12-11 14:10:20.650225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:27.705 [2024-12-11 14:10:20.650234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:27.705 [2024-12-11 14:10:20.650247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:27.705 [2024-12-11 14:10:20.650256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:27.705 [2024-12-11 14:10:20.650268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:27.705 [2024-12-11 14:10:20.650277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.705 [2024-12-11 14:10:20.650288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:27.705 [2024-12-11 14:10:20.650299] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:27.705 [2024-12-11 14:10:20.650313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.705 [2024-12-11 14:10:20.650322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:27.705 [2024-12-11 14:10:20.650334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:27.705 [2024-12-11 14:10:20.650343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.705 [2024-12-11 14:10:20.650357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:27.705 [2024-12-11 14:10:20.650367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:27.705 [2024-12-11 14:10:20.650379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.705 [2024-12-11 14:10:20.650389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:27.705 [2024-12-11 14:10:20.650400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:27.705 [2024-12-11 14:10:20.650409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:27.705 [2024-12-11 14:10:20.650421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:27.705 [2024-12-11 14:10:20.650430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:27.705 [2024-12-11 14:10:20.650442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:27.705 [2024-12-11 14:10:20.650452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:27.705 [2024-12-11 14:10:20.650463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:27.705 [2024-12-11 14:10:20.650472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:27.705 [2024-12-11 14:10:20.650484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:27.705 [2024-12-11 14:10:20.650493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:27.705 [2024-12-11 14:10:20.650505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:27.705 [2024-12-11 14:10:20.650514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:27.705 [2024-12-11 14:10:20.650528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:27.705 [2024-12-11 14:10:20.650537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.705 [2024-12-11 14:10:20.650549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:27.705 [2024-12-11 14:10:20.650558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:27.705 [2024-12-11 14:10:20.650569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.705 [2024-12-11 14:10:20.650578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:27.705 [2024-12-11 14:10:20.650592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:27.705 [2024-12-11 14:10:20.650601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.705 [2024-12-11 14:10:20.650613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:27.705 [2024-12-11 14:10:20.650622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:27.705 [2024-12-11 14:10:20.650633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.705 [2024-12-11 14:10:20.650643] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:27.705 [2024-12-11 14:10:20.650656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:27.705 [2024-12-11 14:10:20.650665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:27.705 [2024-12-11 14:10:20.650677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.705 [2024-12-11 14:10:20.650688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:27.705 [2024-12-11 14:10:20.650703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:27.705 [2024-12-11 14:10:20.650713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:27.705 [2024-12-11 14:10:20.650725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:27.705 [2024-12-11 14:10:20.650734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:27.705 [2024-12-11 14:10:20.650746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:27.705 [2024-12-11 14:10:20.650758] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:27.705 [2024-12-11 14:10:20.650773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:27.705 [2024-12-11 14:10:20.650787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:27.705 [2024-12-11 14:10:20.650800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:27.705 [2024-12-11 14:10:20.650810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:27.705 [2024-12-11 14:10:20.650833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:27.705 [2024-12-11 14:10:20.650845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:27.705 [2024-12-11 14:10:20.650858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:27.705 [2024-12-11 14:10:20.650868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:27.705 [2024-12-11 14:10:20.650881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:27.705 [2024-12-11 14:10:20.650891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:27.705 [2024-12-11 14:10:20.650908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:27.705 [2024-12-11 14:10:20.650918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:27.705 [2024-12-11 14:10:20.650945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:27.705 [2024-12-11 14:10:20.650955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:27.705 [2024-12-11 14:10:20.650968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:27.705 [2024-12-11 14:10:20.650981] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:27.705 [2024-12-11 14:10:20.650995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:27.705 [2024-12-11 14:10:20.651006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:27.705 [2024-12-11 14:10:20.651018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:27.705 [2024-12-11 14:10:20.651029] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:27.705 [2024-12-11 14:10:20.651041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:27.705 [2024-12-11 14:10:20.651053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.705 [2024-12-11 14:10:20.651065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:27.705 [2024-12-11 14:10:20.651075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.946 ms 00:29:27.705 [2024-12-11 14:10:20.651087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.705 [2024-12-11 14:10:20.651128] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
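The wall of RPC traffic above builds the device stack under test: after deleting a stale lvstore (a0ee91dc-...), the 0000:00:11.0 namespace becomes basen1, a thin-provisioned 20480 MiB lvol (lvs/basen1p0) carved out of it becomes the base device, and a 5120 MiB split of cachen1 (the 0000:00:10.0 namespace) becomes the write-buffer cache. Condensed, it is six rpc.py calls; the angle-bracket placeholders stand for the UUIDs the earlier calls print:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0   # namespace appears as basen1
    $rpc bdev_lvol_create_lvstore basen1 lvs                           # prints the lvstore UUID
    $rpc bdev_lvol_create basen1p0 20480 -t -u <lvstore-uuid>          # thin 20 GiB base lvol
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0  # namespace appears as cachen1
    $rpc bdev_split_create cachen1 -s 5120 1                           # one 5 GiB split: cachen1p0
    $rpc -t 60 bdev_ftl_create -b ftl -d <base-lvol-uuid> -c cachen1p0 --l2p_dram_limit 2

Everything after bdev_ftl_create is FTL start-up on this fresh stack: the layout dump above records where each metadata region landed on the two devices, and the scrub notice above plus the "Scrubbing 5 chunks" line below bracket roughly 3.5 s of the 3.9 s total startup.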
00:29:27.705 [2024-12-11 14:10:20.651150] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:31.896 [2024-12-11 14:10:24.131083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.131152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:31.896 [2024-12-11 14:10:24.131170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3485.603 ms 00:29:31.896 [2024-12-11 14:10:24.131183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.168201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.168497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:31.896 [2024-12-11 14:10:24.168525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.768 ms 00:29:31.896 [2024-12-11 14:10:24.168539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.168640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.168657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:31.896 [2024-12-11 14:10:24.168669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:31.896 [2024-12-11 14:10:24.168688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.212180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.212232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:31.896 [2024-12-11 14:10:24.212247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.519 ms 00:29:31.896 [2024-12-11 14:10:24.212260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.212299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.212317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:31.896 [2024-12-11 14:10:24.212329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:31.896 [2024-12-11 14:10:24.212341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.212850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.212886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:31.896 [2024-12-11 14:10:24.212907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.430 ms 00:29:31.896 [2024-12-11 14:10:24.212921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.212972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.212986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:31.896 [2024-12-11 14:10:24.213000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:29:31.896 [2024-12-11 14:10:24.213015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.232939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.232987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:31.896 [2024-12-11 14:10:24.233002] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.936 ms 00:29:31.896 [2024-12-11 14:10:24.233015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.256009] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:31.896 [2024-12-11 14:10:24.257254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.257288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:31.896 [2024-12-11 14:10:24.257306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.179 ms 00:29:31.896 [2024-12-11 14:10:24.257319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.289753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.289941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:31.896 [2024-12-11 14:10:24.289973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.444 ms 00:29:31.896 [2024-12-11 14:10:24.289984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.290079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.290096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:31.896 [2024-12-11 14:10:24.290113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:29:31.896 [2024-12-11 14:10:24.290132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.326759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.326805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:31.896 [2024-12-11 14:10:24.326836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.620 ms 00:29:31.896 [2024-12-11 14:10:24.326847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.362262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.362304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:31.896 [2024-12-11 14:10:24.362322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.418 ms 00:29:31.896 [2024-12-11 14:10:24.362332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.363043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.363067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:31.896 [2024-12-11 14:10:24.363081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.668 ms 00:29:31.896 [2024-12-11 14:10:24.363095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.461460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.461513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:31.896 [2024-12-11 14:10:24.461552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 98.460 ms 00:29:31.896 [2024-12-11 14:10:24.461564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.499098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:31.896 [2024-12-11 14:10:24.499148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:31.896 [2024-12-11 14:10:24.499167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.502 ms 00:29:31.896 [2024-12-11 14:10:24.499194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.535757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.535803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:31.896 [2024-12-11 14:10:24.535820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.573 ms 00:29:31.896 [2024-12-11 14:10:24.535842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.572041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.572083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:31.896 [2024-12-11 14:10:24.572101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.194 ms 00:29:31.896 [2024-12-11 14:10:24.572111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.572158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.572169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:31.896 [2024-12-11 14:10:24.572186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:31.896 [2024-12-11 14:10:24.572196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.572296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.896 [2024-12-11 14:10:24.572311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:31.896 [2024-12-11 14:10:24.572323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:29:31.896 [2024-12-11 14:10:24.572333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.896 [2024-12-11 14:10:24.573532] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3946.009 ms, result 0 00:29:31.896 { 00:29:31.896 "name": "ftl", 00:29:31.896 "uuid": "dbe95b4b-8755-43da-acf2-d6b0cf13ae48" 00:29:31.896 } 00:29:31.896 14:10:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:31.896 [2024-12-11 14:10:24.804266] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.896 14:10:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:32.155 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:32.414 [2024-12-11 14:10:25.200120] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:32.414 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:32.414 [2024-12-11 14:10:25.405888] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:32.414 14:10:25 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:32.982 Fill FTL, iteration 1 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84634 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:32.982 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:32.983 14:10:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84634 /var/tmp/spdk.tgt.sock 00:29:32.983 14:10:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84634 ']' 00:29:32.983 14:10:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:32.983 14:10:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.983 14:10:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:32.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:32.983 14:10:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.983 14:10:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:32.983 [2024-12-11 14:10:25.864038] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
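The trace above exports the FTL bdev over NVMe/TCP (transport, subsystem cnode0, namespace, listener on 127.0.0.1:4420) and then launches a second SPDK app, pinned to core 1 with its own RPC socket, as pid 84634. Just below, that app attaches the exported namespace and emits a bdev-only JSON config for the spdk_dd runs. Pulled together from the trace (the redirect target is inferred from the ini.json checks elsewhere in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # second SPDK app on core 1, used only to generate ini.json
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock &
    # attach the subsystem exported at 127.0.0.1:4420; the bdev appears as ftln1
    $rpc -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    # wrap the bdev subsystem config in a complete JSON document
    { echo '{"subsystems": ['
      $rpc -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
      echo ']}'; } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json

Once ini.json exists this initiator is killed again (the "killing process with pid 84634" below); each spdk_dd invocation will load the saved config itself.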
00:29:32.983 [2024-12-11 14:10:25.864188] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84634 ] 00:29:33.242 [2024-12-11 14:10:26.043580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.242 [2024-12-11 14:10:26.149333] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.180 14:10:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.180 14:10:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:34.180 14:10:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:34.440 ftln1 00:29:34.440 14:10:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:34.440 14:10:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:34.699 14:10:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:34.700 14:10:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84634 00:29:34.700 14:10:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84634 ']' 00:29:34.700 14:10:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84634 00:29:34.700 14:10:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:34.700 14:10:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:34.700 14:10:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84634 00:29:34.700 killing process with pid 84634 00:29:34.700 14:10:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:34.700 14:10:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:34.700 14:10:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84634' 00:29:34.700 14:10:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84634 00:29:34.700 14:10:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84634 00:29:37.236 14:10:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:37.236 14:10:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:37.236 [2024-12-11 14:10:30.216421] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
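With ini.json in hand, tcp_dd is just spdk_dd run as its own short-lived SPDK app (pid 84692 here): it loads the saved bdev config, so ftln1 is available without a long-lived initiator process. The first fill writes 1 GiB of /dev/urandom at queue depth 2, verbatim from the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0

The "Copying:" lines that follow are spdk_dd's progress meter, settling around 255 MBps for this first gigabyte.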
00:29:37.236 [2024-12-11 14:10:30.216561] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84692 ] 00:29:37.570 [2024-12-11 14:10:30.400162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.570 [2024-12-11 14:10:30.540788] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.476  [2024-12-11T14:10:33.091Z] Copying: 258/1024 [MB] (258 MBps) [2024-12-11T14:10:34.470Z] Copying: 507/1024 [MB] (249 MBps) [2024-12-11T14:10:35.406Z] Copying: 756/1024 [MB] (249 MBps) [2024-12-11T14:10:35.406Z] Copying: 1023/1024 [MB] (267 MBps) [2024-12-11T14:10:36.343Z] Copying: 1024/1024 [MB] (average 255 MBps) 00:29:43.296 00:29:43.296 Calculate MD5 checksum, iteration 1 00:29:43.296 14:10:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:43.296 14:10:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:43.296 14:10:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:43.296 14:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:43.296 14:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:43.296 14:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:43.296 14:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:43.296 14:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:43.555 [2024-12-11 14:10:36.374736] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
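Each fill is immediately read back through the same path and hashed; the digest is what the test can presumably compare the device against after the shutdown/upgrade cycle that gives this test its name. The readback mirrors the fill with input and output swapped (--ib/--of instead of --if/--ob), and md5sum then yields sums[0]:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '   # -> sums[0]

seek tracks the write offset and skip the read offset, both counted in 1 MiB blocks, so both advance to 1024 for iteration 2.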
00:29:43.555 [2024-12-11 14:10:36.375044] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84756 ] 00:29:43.555 [2024-12-11 14:10:36.552596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.814 [2024-12-11 14:10:36.694972] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.193  [2024-12-11T14:10:38.809Z] Copying: 711/1024 [MB] (711 MBps) [2024-12-11T14:10:39.746Z] Copying: 1024/1024 [MB] (average 707 MBps) 00:29:46.699 00:29:46.699 14:10:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:46.699 14:10:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:48.604 14:10:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:48.604 Fill FTL, iteration 2 00:29:48.604 14:10:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=6d82c68f0392dfbe62ebb1fce2778f28 00:29:48.604 14:10:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:48.604 14:10:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:48.604 14:10:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:48.604 14:10:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:48.604 14:10:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:48.604 14:10:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:48.604 14:10:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:48.604 14:10:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:48.604 14:10:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:48.604 [2024-12-11 14:10:41.223584] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
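Zooming out, the seek/skip/sums bookkeeping is a simple two-pass loop. The sketch below is a reconstruction of the upgrade_shutdown.sh logic from the visible trace, not the script's literal text; tcp_dd abbreviates the full spdk_dd invocation shown earlier:

    testdir=/home/vagrant/spdk_repo/spdk/test/ftl
    seek=0; skip=0; iterations=2; sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        (( seek += 1024 ))
        echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        (( skip += 1024 ))
        sums[i]=$(md5sum "$testdir/file" | cut -f1 -d' ')
    done

With iterations=2, the second pass (running below at seek=1024) lands in the second GiB of the device, writing noticeably slower (~242 MBps) than the readbacks (~700 MBps).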
00:29:48.604 [2024-12-11 14:10:41.223856] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84812 ] 00:29:48.604 [2024-12-11 14:10:41.404209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.604 [2024-12-11 14:10:41.506543] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.983  [2024-12-11T14:10:43.967Z] Copying: 248/1024 [MB] (248 MBps) [2024-12-11T14:10:45.348Z] Copying: 489/1024 [MB] (241 MBps) [2024-12-11T14:10:46.283Z] Copying: 726/1024 [MB] (237 MBps) [2024-12-11T14:10:46.283Z] Copying: 969/1024 [MB] (243 MBps) [2024-12-11T14:10:47.663Z] Copying: 1024/1024 [MB] (average 242 MBps) 00:29:54.616 00:29:54.616 Calculate MD5 checksum, iteration 2 00:29:54.616 14:10:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:54.616 14:10:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:54.616 14:10:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:54.616 14:10:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:54.616 14:10:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:54.616 14:10:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:54.616 14:10:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:54.616 14:10:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:54.616 [2024-12-11 14:10:47.389497] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
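The fill/verify loop driving these iterations (upgrade_shutdown.sh@38-48 in the xtrace) has roughly this shape — a sketch in which $testfile standing for test/ftl/file is an assumption:

    # Fill 1 GiB of the FTL bdev with random data, read the same range
    # back out, and record its MD5 so it can be compared after restart.
    seek=0 skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        seek=$((seek + 1024))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')
    done

Note the asymmetric throughput in the progress lines: fills land at roughly 240-260 MBps, while the read-back passes run at around 700 MBps.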
00:29:54.616 [2024-12-11 14:10:47.389762] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84876 ] 00:29:54.616 [2024-12-11 14:10:47.570784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.875 [2024-12-11 14:10:47.677598] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.780  [2024-12-11T14:10:49.827Z] Copying: 711/1024 [MB] (711 MBps) [2024-12-11T14:10:51.207Z] Copying: 1024/1024 [MB] (average 693 MBps) 00:29:58.160 00:29:58.160 14:10:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:58.160 14:10:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:00.066 14:10:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:00.066 14:10:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2705dc1017c2707dff658ec23b0b2efb 00:30:00.066 14:10:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:00.066 14:10:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:00.066 14:10:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:00.066 [2024-12-11 14:10:52.846362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.066 [2024-12-11 14:10:52.846414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:00.066 [2024-12-11 14:10:52.846431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:30:00.066 [2024-12-11 14:10:52.846441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.067 [2024-12-11 14:10:52.846490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.067 [2024-12-11 14:10:52.846506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:00.067 [2024-12-11 14:10:52.846517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:00.067 [2024-12-11 14:10:52.846527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.067 [2024-12-11 14:10:52.846547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.067 [2024-12-11 14:10:52.846558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:00.067 [2024-12-11 14:10:52.846568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:00.067 [2024-12-11 14:10:52.846578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.067 [2024-12-11 14:10:52.846641] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.283 ms, result 0 00:30:00.067 true 00:30:00.067 14:10:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:00.067 { 00:30:00.067 "name": "ftl", 00:30:00.067 "properties": [ 00:30:00.067 { 00:30:00.067 "name": "superblock_version", 00:30:00.067 "value": 5, 00:30:00.067 "read-only": true 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "name": "base_device", 00:30:00.067 "bands": [ 00:30:00.067 { 00:30:00.067 "id": 0, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 
00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 1, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 2, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 3, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 4, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 5, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 6, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 7, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 8, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 9, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 10, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 11, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 12, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 13, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 14, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 15, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 16, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 17, 00:30:00.067 "state": "FREE", 00:30:00.067 "validity": 0.0 00:30:00.067 } 00:30:00.067 ], 00:30:00.067 "read-only": true 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "name": "cache_device", 00:30:00.067 "type": "bdev", 00:30:00.067 "chunks": [ 00:30:00.067 { 00:30:00.067 "id": 0, 00:30:00.067 "state": "INACTIVE", 00:30:00.067 "utilization": 0.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 1, 00:30:00.067 "state": "CLOSED", 00:30:00.067 "utilization": 1.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 2, 00:30:00.067 "state": "CLOSED", 00:30:00.067 "utilization": 1.0 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 3, 00:30:00.067 "state": "OPEN", 00:30:00.067 "utilization": 0.001953125 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "id": 4, 00:30:00.067 "state": "OPEN", 00:30:00.067 "utilization": 0.0 00:30:00.067 } 00:30:00.067 ], 00:30:00.067 "read-only": true 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "name": "verbose_mode", 00:30:00.067 "value": true, 00:30:00.067 "unit": "", 00:30:00.067 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:00.067 }, 00:30:00.067 { 00:30:00.067 "name": "prep_upgrade_on_shutdown", 00:30:00.067 "value": false, 00:30:00.067 "unit": "", 00:30:00.067 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:00.067 } 00:30:00.067 ] 00:30:00.067 } 00:30:00.067 14:10:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:00.327 [2024-12-11 14:10:53.244035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:00.327 [2024-12-11 14:10:53.244082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:00.327 [2024-12-11 14:10:53.244098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:00.327 [2024-12-11 14:10:53.244107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.327 [2024-12-11 14:10:53.244150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.327 [2024-12-11 14:10:53.244161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:00.327 [2024-12-11 14:10:53.244171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:00.327 [2024-12-11 14:10:53.244180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.327 [2024-12-11 14:10:53.244200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.327 [2024-12-11 14:10:53.244211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:00.327 [2024-12-11 14:10:53.244221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:00.327 [2024-12-11 14:10:53.244230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.327 [2024-12-11 14:10:53.244287] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.248 ms, result 0 00:30:00.327 true 00:30:00.327 14:10:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:00.327 14:10:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:00.327 14:10:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:00.586 14:10:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:00.586 14:10:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:00.586 14:10:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:00.857 [2024-12-11 14:10:53.647989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.857 [2024-12-11 14:10:53.648046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:00.857 [2024-12-11 14:10:53.648078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:00.857 [2024-12-11 14:10:53.648088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.857 [2024-12-11 14:10:53.648113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.857 [2024-12-11 14:10:53.648124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:00.857 [2024-12-11 14:10:53.648134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:00.857 [2024-12-11 14:10:53.648145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.857 [2024-12-11 14:10:53.648164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.857 [2024-12-11 14:10:53.648175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:00.857 [2024-12-11 14:10:53.648185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:00.857 [2024-12-11 14:10:53.648194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:00.857 [2024-12-11 14:10:53.648251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.253 ms, result 0 00:30:00.857 true 00:30:00.857 14:10:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:00.857 { 00:30:00.857 "name": "ftl", 00:30:00.857 "properties": [ 00:30:00.857 { 00:30:00.857 "name": "superblock_version", 00:30:00.857 "value": 5, 00:30:00.857 "read-only": true 00:30:00.857 }, 00:30:00.857 { 00:30:00.857 "name": "base_device", 00:30:00.857 "bands": [ 00:30:00.857 { 00:30:00.857 "id": 0, 00:30:00.857 "state": "FREE", 00:30:00.857 "validity": 0.0 00:30:00.857 }, 00:30:00.857 { 00:30:00.857 "id": 1, 00:30:00.857 "state": "FREE", 00:30:00.857 "validity": 0.0 00:30:00.857 }, 00:30:00.857 { 00:30:00.857 "id": 2, 00:30:00.857 "state": "FREE", 00:30:00.857 "validity": 0.0 00:30:00.857 }, 00:30:00.857 { 00:30:00.857 "id": 3, 00:30:00.857 "state": "FREE", 00:30:00.857 "validity": 0.0 00:30:00.857 }, 00:30:00.857 { 00:30:00.857 "id": 4, 00:30:00.857 "state": "FREE", 00:30:00.857 "validity": 0.0 00:30:00.857 }, 00:30:00.857 { 00:30:00.857 "id": 5, 00:30:00.857 "state": "FREE", 00:30:00.857 "validity": 0.0 00:30:00.857 }, 00:30:00.857 { 00:30:00.858 "id": 6, 00:30:00.858 "state": "FREE", 00:30:00.858 "validity": 0.0 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "id": 7, 00:30:00.858 "state": "FREE", 00:30:00.858 "validity": 0.0 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "id": 8, 00:30:00.858 "state": "FREE", 00:30:00.858 "validity": 0.0 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "id": 9, 00:30:00.858 "state": "FREE", 00:30:00.858 "validity": 0.0 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "id": 10, 00:30:00.858 "state": "FREE", 00:30:00.858 "validity": 0.0 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "id": 11, 00:30:00.858 "state": "FREE", 00:30:00.858 "validity": 0.0 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "id": 12, 00:30:00.858 "state": "FREE", 00:30:00.858 "validity": 0.0 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "id": 13, 00:30:00.858 "state": "FREE", 00:30:00.858 "validity": 0.0 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "id": 14, 00:30:00.858 "state": "FREE", 00:30:00.858 "validity": 0.0 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "id": 15, 00:30:00.858 "state": "FREE", 00:30:00.858 "validity": 0.0 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "id": 16, 00:30:00.858 "state": "FREE", 00:30:00.858 "validity": 0.0 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "id": 17, 00:30:00.858 "state": "FREE", 00:30:00.858 "validity": 0.0 00:30:00.858 } 00:30:00.858 ], 00:30:00.858 "read-only": true 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "name": "cache_device", 00:30:00.858 "type": "bdev", 00:30:00.858 "chunks": [ 00:30:00.858 { 00:30:00.858 "id": 0, 00:30:00.858 "state": "INACTIVE", 00:30:00.858 "utilization": 0.0 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "id": 1, 00:30:00.858 "state": "CLOSED", 00:30:00.858 "utilization": 1.0 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "id": 2, 00:30:00.858 "state": "CLOSED", 00:30:00.858 "utilization": 1.0 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "id": 3, 00:30:00.858 "state": "OPEN", 00:30:00.858 "utilization": 0.001953125 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "id": 4, 00:30:00.858 "state": "OPEN", 00:30:00.858 "utilization": 0.0 00:30:00.858 } 00:30:00.858 ], 00:30:00.858 "read-only": true 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "name": "verbose_mode", 
00:30:00.858 "value": true, 00:30:00.858 "unit": "", 00:30:00.858 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:00.858 }, 00:30:00.858 { 00:30:00.858 "name": "prep_upgrade_on_shutdown", 00:30:00.858 "value": true, 00:30:00.858 "unit": "", 00:30:00.858 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:00.858 } 00:30:00.858 ] 00:30:00.858 } 00:30:00.858 14:10:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:00.858 14:10:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84507 ]] 00:30:00.858 14:10:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84507 00:30:00.858 14:10:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84507 ']' 00:30:00.858 14:10:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84507 00:30:00.858 14:10:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:00.858 14:10:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:00.858 14:10:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84507 00:30:01.159 killing process with pid 84507 00:30:01.159 14:10:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:01.159 14:10:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:01.159 14:10:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84507' 00:30:01.159 14:10:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84507 00:30:01.159 14:10:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84507 00:30:02.219 [2024-12-11 14:10:54.945464] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:02.219 [2024-12-11 14:10:54.964245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.219 [2024-12-11 14:10:54.964283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:02.219 [2024-12-11 14:10:54.964297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:02.219 [2024-12-11 14:10:54.964307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.219 [2024-12-11 14:10:54.964344] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:02.219 [2024-12-11 14:10:54.968338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.219 [2024-12-11 14:10:54.968367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:02.219 [2024-12-11 14:10:54.968380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.984 ms 00:30:02.219 [2024-12-11 14:10:54.968394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.350 [2024-12-11 14:11:02.171117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.350 [2024-12-11 14:11:02.171190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:10.350 [2024-12-11 14:11:02.171332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7214.374 ms 00:30:10.350 [2024-12-11 14:11:02.171344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.350 [2024-12-11 14:11:02.172445] mngt/ftl_mngt.c: 427:trace_step: 
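The used-chunk guard at upgrade_shutdown.sh@63-64 above is a single RPC piped through jq; standalone it reads:

    # Count cache chunks with non-zero utilization — 3 in this run
    # (chunks 1 and 2 CLOSED at 1.0, chunk 3 OPEN at 0.001953125) —
    # confirming there is dirty cache state for the shutdown path to persist.
    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]] && return 1    # exact failure handling is assumed here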
*NOTICE*: [FTL][ftl] Action 00:30:10.350 [2024-12-11 14:11:02.172476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:10.350 [2024-12-11 14:11:02.172489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.084 ms 00:30:10.350 [2024-12-11 14:11:02.172499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.350 [2024-12-11 14:11:02.173414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.350 [2024-12-11 14:11:02.173436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:10.350 [2024-12-11 14:11:02.173448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.888 ms 00:30:10.350 [2024-12-11 14:11:02.173464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.350 [2024-12-11 14:11:02.188510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.350 [2024-12-11 14:11:02.188547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:10.350 [2024-12-11 14:11:02.188559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.037 ms 00:30:10.350 [2024-12-11 14:11:02.188568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.350 [2024-12-11 14:11:02.197527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.350 [2024-12-11 14:11:02.197562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:10.350 [2024-12-11 14:11:02.197574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.923 ms 00:30:10.350 [2024-12-11 14:11:02.197584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.350 [2024-12-11 14:11:02.197691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.350 [2024-12-11 14:11:02.197710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:10.350 [2024-12-11 14:11:02.197721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:30:10.350 [2024-12-11 14:11:02.197731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.350 [2024-12-11 14:11:02.211654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.350 [2024-12-11 14:11:02.211692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:10.350 [2024-12-11 14:11:02.211704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.929 ms 00:30:10.350 [2024-12-11 14:11:02.211713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.350 [2024-12-11 14:11:02.225948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.350 [2024-12-11 14:11:02.225982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:10.350 [2024-12-11 14:11:02.225993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.209 ms 00:30:10.350 [2024-12-11 14:11:02.226002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.350 [2024-12-11 14:11:02.240324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.350 [2024-12-11 14:11:02.240359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:10.350 [2024-12-11 14:11:02.240371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.296 ms 00:30:10.350 [2024-12-11 14:11:02.240380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.350 [2024-12-11 14:11:02.254294] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.350 [2024-12-11 14:11:02.254330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:10.350 [2024-12-11 14:11:02.254357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.848 ms 00:30:10.350 [2024-12-11 14:11:02.254366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.350 [2024-12-11 14:11:02.254397] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:10.350 [2024-12-11 14:11:02.254424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:10.350 [2024-12-11 14:11:02.254437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:10.350 [2024-12-11 14:11:02.254448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:10.350 [2024-12-11 14:11:02.254458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:10.350 [2024-12-11 14:11:02.254469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:10.350 [2024-12-11 14:11:02.254479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:10.350 [2024-12-11 14:11:02.254489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:10.350 [2024-12-11 14:11:02.254500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:10.350 [2024-12-11 14:11:02.254509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:10.350 [2024-12-11 14:11:02.254520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:10.350 [2024-12-11 14:11:02.254530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:10.350 [2024-12-11 14:11:02.254540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:10.350 [2024-12-11 14:11:02.254549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:10.350 [2024-12-11 14:11:02.254559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:10.350 [2024-12-11 14:11:02.254569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:10.350 [2024-12-11 14:11:02.254579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:10.351 [2024-12-11 14:11:02.254589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:10.351 [2024-12-11 14:11:02.254598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:10.351 [2024-12-11 14:11:02.254610] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:10.351 [2024-12-11 14:11:02.254620] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: dbe95b4b-8755-43da-acf2-d6b0cf13ae48 00:30:10.351 [2024-12-11 14:11:02.254630] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:10.351 [2024-12-11 14:11:02.254639] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:10.351 [2024-12-11 14:11:02.254649] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:10.351 [2024-12-11 14:11:02.254684] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:10.351 [2024-12-11 14:11:02.254697] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:10.351 [2024-12-11 14:11:02.254707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:10.351 [2024-12-11 14:11:02.254720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:10.351 [2024-12-11 14:11:02.254730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:10.351 [2024-12-11 14:11:02.254740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:10.351 [2024-12-11 14:11:02.254749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.351 [2024-12-11 14:11:02.254759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:10.351 [2024-12-11 14:11:02.254770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.353 ms 00:30:10.351 [2024-12-11 14:11:02.254780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.351 [2024-12-11 14:11:02.273574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.351 [2024-12-11 14:11:02.273606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:10.351 [2024-12-11 14:11:02.273641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.807 ms 00:30:10.351 [2024-12-11 14:11:02.273651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.351 [2024-12-11 14:11:02.274258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.351 [2024-12-11 14:11:02.274281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:10.351 [2024-12-11 14:11:02.274292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.575 ms 00:30:10.351 [2024-12-11 14:11:02.274302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.351 [2024-12-11 14:11:02.337013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:10.351 [2024-12-11 14:11:02.337055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:10.351 [2024-12-11 14:11:02.337084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:10.351 [2024-12-11 14:11:02.337094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.351 [2024-12-11 14:11:02.337125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:10.351 [2024-12-11 14:11:02.337135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:10.351 [2024-12-11 14:11:02.337145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:10.351 [2024-12-11 14:11:02.337154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.351 [2024-12-11 14:11:02.337242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:10.351 [2024-12-11 14:11:02.337255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:10.351 [2024-12-11 14:11:02.337271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:10.351 [2024-12-11 14:11:02.337280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.351 [2024-12-11 14:11:02.337297] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:10.351 [2024-12-11 14:11:02.337307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:10.351 [2024-12-11 14:11:02.337317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:10.351 [2024-12-11 14:11:02.337327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.351 [2024-12-11 14:11:02.458005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:10.351 [2024-12-11 14:11:02.458053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:10.351 [2024-12-11 14:11:02.458073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:10.351 [2024-12-11 14:11:02.458100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.351 [2024-12-11 14:11:02.552513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:10.351 [2024-12-11 14:11:02.552559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:10.351 [2024-12-11 14:11:02.552572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:10.351 [2024-12-11 14:11:02.552582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.351 [2024-12-11 14:11:02.552694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:10.351 [2024-12-11 14:11:02.552706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:10.351 [2024-12-11 14:11:02.552716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:10.351 [2024-12-11 14:11:02.552727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.351 [2024-12-11 14:11:02.552778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:10.351 [2024-12-11 14:11:02.552791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:10.351 [2024-12-11 14:11:02.552802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:10.351 [2024-12-11 14:11:02.552811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.351 [2024-12-11 14:11:02.552948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:10.351 [2024-12-11 14:11:02.552963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:10.351 [2024-12-11 14:11:02.552974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:10.351 [2024-12-11 14:11:02.552983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.351 [2024-12-11 14:11:02.553026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:10.351 [2024-12-11 14:11:02.553038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:10.351 [2024-12-11 14:11:02.553049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:10.351 [2024-12-11 14:11:02.553058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.351 [2024-12-11 14:11:02.553094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:10.351 [2024-12-11 14:11:02.553105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:10.351 [2024-12-11 14:11:02.553115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:10.351 [2024-12-11 14:11:02.553124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.351 
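The WAF in the statistics dump above is simply total writes over user writes; checking the numbers from this run:

    echo 'scale=4; 786752 / 524288' | bc    # => 1.5006, matching the dump

i.e. roughly half again as many blocks hit the media as the user wrote, the difference being the FTL's own metadata and housekeeping writes.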
[2024-12-11 14:11:02.553171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:10.351 [2024-12-11 14:11:02.553183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:10.351 [2024-12-11 14:11:02.553193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:10.351 [2024-12-11 14:11:02.553202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.351 [2024-12-11 14:11:02.553324] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7601.366 ms, result 0 00:30:12.898 14:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:12.898 14:11:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:12.898 14:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:12.898 14:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:12.898 14:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:12.898 14:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85066 00:30:12.898 14:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:12.898 14:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:12.898 14:11:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85066 00:30:12.898 14:11:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85066 ']' 00:30:12.898 14:11:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.898 14:11:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:12.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.898 14:11:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.898 14:11:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:12.898 14:11:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:12.898 [2024-12-11 14:11:05.777268] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
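The target restart traced here (common.sh@81-91) amounts to relaunching spdk_tgt against the saved config and waiting on its RPC socket — roughly, with the process plumbing reconstructed from the xtrace:

    # Bring the target back up on core 0 with the JSON config saved before
    # shutdown, then block until /var/tmp/spdk.sock accepts RPCs.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"

The startup that follows then walks the restore path visible below: load and validate the super block, run layout setup/upgrade, scrub the NV cache, and restore the valid map, band info, trim, P2L and L2P state.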
00:30:12.898 [2024-12-11 14:11:05.777398] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85066 ] 00:30:13.157 [2024-12-11 14:11:05.960223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.157 [2024-12-11 14:11:06.067771] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.094 [2024-12-11 14:11:07.002779] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:14.094 [2024-12-11 14:11:07.002873] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:14.355 [2024-12-11 14:11:07.149964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.355 [2024-12-11 14:11:07.150006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:14.355 [2024-12-11 14:11:07.150021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:14.355 [2024-12-11 14:11:07.150031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.355 [2024-12-11 14:11:07.150086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.355 [2024-12-11 14:11:07.150099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:14.355 [2024-12-11 14:11:07.150110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:14.355 [2024-12-11 14:11:07.150120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.355 [2024-12-11 14:11:07.150157] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:14.355 [2024-12-11 14:11:07.151037] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:14.355 [2024-12-11 14:11:07.151067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.355 [2024-12-11 14:11:07.151078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:14.355 [2024-12-11 14:11:07.151089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.923 ms 00:30:14.355 [2024-12-11 14:11:07.151099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.355 [2024-12-11 14:11:07.152505] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:14.355 [2024-12-11 14:11:07.170646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.355 [2024-12-11 14:11:07.170681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:14.355 [2024-12-11 14:11:07.170701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.171 ms 00:30:14.355 [2024-12-11 14:11:07.170710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.355 [2024-12-11 14:11:07.170785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.355 [2024-12-11 14:11:07.170797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:14.355 [2024-12-11 14:11:07.170807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:14.355 [2024-12-11 14:11:07.170817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.355 [2024-12-11 14:11:07.177654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.355 [2024-12-11 
14:11:07.177682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:14.355 [2024-12-11 14:11:07.177709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.761 ms 00:30:14.355 [2024-12-11 14:11:07.177719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.355 [2024-12-11 14:11:07.177776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.356 [2024-12-11 14:11:07.177790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:14.356 [2024-12-11 14:11:07.177800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:30:14.356 [2024-12-11 14:11:07.177810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.356 [2024-12-11 14:11:07.177861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.356 [2024-12-11 14:11:07.177878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:14.356 [2024-12-11 14:11:07.177889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:14.356 [2024-12-11 14:11:07.177898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.356 [2024-12-11 14:11:07.177923] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:14.356 [2024-12-11 14:11:07.182688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.356 [2024-12-11 14:11:07.182717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:14.356 [2024-12-11 14:11:07.182728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.777 ms 00:30:14.356 [2024-12-11 14:11:07.182741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.356 [2024-12-11 14:11:07.182785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.356 [2024-12-11 14:11:07.182796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:14.356 [2024-12-11 14:11:07.182806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:14.356 [2024-12-11 14:11:07.182815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.356 [2024-12-11 14:11:07.182877] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:14.356 [2024-12-11 14:11:07.182903] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:14.356 [2024-12-11 14:11:07.182940] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:14.356 [2024-12-11 14:11:07.182958] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:14.356 [2024-12-11 14:11:07.183044] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:14.356 [2024-12-11 14:11:07.183057] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:14.356 [2024-12-11 14:11:07.183070] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:14.356 [2024-12-11 14:11:07.183083] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:14.356 [2024-12-11 14:11:07.183094] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:14.356 [2024-12-11 14:11:07.183124] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:14.356 [2024-12-11 14:11:07.183135] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:14.356 [2024-12-11 14:11:07.183145] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:14.356 [2024-12-11 14:11:07.183155] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:14.356 [2024-12-11 14:11:07.183165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.356 [2024-12-11 14:11:07.183174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:14.356 [2024-12-11 14:11:07.183185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.291 ms 00:30:14.356 [2024-12-11 14:11:07.183195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.356 [2024-12-11 14:11:07.183269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.356 [2024-12-11 14:11:07.183279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:14.356 [2024-12-11 14:11:07.183292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:30:14.356 [2024-12-11 14:11:07.183302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.356 [2024-12-11 14:11:07.183390] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:14.356 [2024-12-11 14:11:07.183404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:14.356 [2024-12-11 14:11:07.183414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:14.356 [2024-12-11 14:11:07.183424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.356 [2024-12-11 14:11:07.183435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:14.356 [2024-12-11 14:11:07.183444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:14.356 [2024-12-11 14:11:07.183457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:14.356 [2024-12-11 14:11:07.183467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:14.356 [2024-12-11 14:11:07.183476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:14.356 [2024-12-11 14:11:07.183485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.356 [2024-12-11 14:11:07.183494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:14.356 [2024-12-11 14:11:07.183503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:14.356 [2024-12-11 14:11:07.183512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.356 [2024-12-11 14:11:07.183521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:14.356 [2024-12-11 14:11:07.183531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:14.356 [2024-12-11 14:11:07.183540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.356 [2024-12-11 14:11:07.183549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:14.356 [2024-12-11 14:11:07.183558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:14.356 [2024-12-11 14:11:07.183567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.356 [2024-12-11 14:11:07.183576] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:14.356 [2024-12-11 14:11:07.183585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:14.356 [2024-12-11 14:11:07.183594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:14.356 [2024-12-11 14:11:07.183603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:14.356 [2024-12-11 14:11:07.183623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:14.356 [2024-12-11 14:11:07.183632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:14.356 [2024-12-11 14:11:07.183641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:14.356 [2024-12-11 14:11:07.183651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:14.356 [2024-12-11 14:11:07.183660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:14.356 [2024-12-11 14:11:07.183668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:14.356 [2024-12-11 14:11:07.183677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:14.356 [2024-12-11 14:11:07.183687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:14.356 [2024-12-11 14:11:07.183695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:14.356 [2024-12-11 14:11:07.183705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:14.356 [2024-12-11 14:11:07.183714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.356 [2024-12-11 14:11:07.183723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:14.356 [2024-12-11 14:11:07.183732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:14.356 [2024-12-11 14:11:07.183741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.356 [2024-12-11 14:11:07.183751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:14.356 [2024-12-11 14:11:07.183762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:14.356 [2024-12-11 14:11:07.183771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.356 [2024-12-11 14:11:07.183780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:14.356 [2024-12-11 14:11:07.183789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:14.356 [2024-12-11 14:11:07.183797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.356 [2024-12-11 14:11:07.183806] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:14.356 [2024-12-11 14:11:07.183816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:14.356 [2024-12-11 14:11:07.183825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:14.356 [2024-12-11 14:11:07.183835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.356 [2024-12-11 14:11:07.183871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:14.356 [2024-12-11 14:11:07.183881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:14.356 [2024-12-11 14:11:07.183890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:14.356 [2024-12-11 14:11:07.183899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:14.356 [2024-12-11 14:11:07.183908] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:14.356 [2024-12-11 14:11:07.183917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:14.356 [2024-12-11 14:11:07.183928] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:14.356 [2024-12-11 14:11:07.183940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:14.356 [2024-12-11 14:11:07.183951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:14.356 [2024-12-11 14:11:07.183961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:14.356 [2024-12-11 14:11:07.183971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:14.356 [2024-12-11 14:11:07.183981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:14.356 [2024-12-11 14:11:07.183991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:14.356 [2024-12-11 14:11:07.184002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:14.356 [2024-12-11 14:11:07.184012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:14.356 [2024-12-11 14:11:07.184023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:14.356 [2024-12-11 14:11:07.184033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:14.356 [2024-12-11 14:11:07.184043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:14.357 [2024-12-11 14:11:07.184053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:14.357 [2024-12-11 14:11:07.184064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:14.357 [2024-12-11 14:11:07.184073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:14.357 [2024-12-11 14:11:07.184084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:14.357 [2024-12-11 14:11:07.184094] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:14.357 [2024-12-11 14:11:07.184107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:14.357 [2024-12-11 14:11:07.184118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:14.357 [2024-12-11 14:11:07.184128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:14.357 [2024-12-11 14:11:07.184138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:14.357 [2024-12-11 14:11:07.184148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:14.357 [2024-12-11 14:11:07.184159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.357 [2024-12-11 14:11:07.184169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:14.357 [2024-12-11 14:11:07.184179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.825 ms 00:30:14.357 [2024-12-11 14:11:07.184189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.357 [2024-12-11 14:11:07.184234] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:14.357 [2024-12-11 14:11:07.184246] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:18.553 [2024-12-11 14:11:10.927175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.553 [2024-12-11 14:11:10.927231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:18.553 [2024-12-11 14:11:10.927248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3749.016 ms 00:30:18.553 [2024-12-11 14:11:10.927259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.553 [2024-12-11 14:11:10.961732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.553 [2024-12-11 14:11:10.961782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:18.553 [2024-12-11 14:11:10.961797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.138 ms 00:30:18.553 [2024-12-11 14:11:10.961807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.553 [2024-12-11 14:11:10.961916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.553 [2024-12-11 14:11:10.961936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:18.553 [2024-12-11 14:11:10.961948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:18.553 [2024-12-11 14:11:10.961959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.553 [2024-12-11 14:11:11.004456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.553 [2024-12-11 14:11:11.004502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:18.553 [2024-12-11 14:11:11.004516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.505 ms 00:30:18.553 [2024-12-11 14:11:11.004529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.553 [2024-12-11 14:11:11.004567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.553 [2024-12-11 14:11:11.004578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:18.553 [2024-12-11 14:11:11.004589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:18.553 [2024-12-11 14:11:11.004598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.553 [2024-12-11 14:11:11.005094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.553 [2024-12-11 14:11:11.005116] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:18.553 [2024-12-11 14:11:11.005127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.443 ms 00:30:18.553 [2024-12-11 14:11:11.005137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.553 [2024-12-11 14:11:11.005182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.553 [2024-12-11 14:11:11.005193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:18.553 [2024-12-11 14:11:11.005204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:18.553 [2024-12-11 14:11:11.005214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.553 [2024-12-11 14:11:11.023346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.553 [2024-12-11 14:11:11.023385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:18.553 [2024-12-11 14:11:11.023398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.138 ms 00:30:18.553 [2024-12-11 14:11:11.023425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.554 [2024-12-11 14:11:11.055011] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:18.554 [2024-12-11 14:11:11.055056] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:18.554 [2024-12-11 14:11:11.055071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.554 [2024-12-11 14:11:11.055099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:18.554 [2024-12-11 14:11:11.055111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.586 ms 00:30:18.554 [2024-12-11 14:11:11.055122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.554 [2024-12-11 14:11:11.073465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.554 [2024-12-11 14:11:11.073503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:18.554 [2024-12-11 14:11:11.073516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.329 ms 00:30:18.554 [2024-12-11 14:11:11.073525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.554 [2024-12-11 14:11:11.090209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.554 [2024-12-11 14:11:11.090256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:18.554 [2024-12-11 14:11:11.090269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.663 ms 00:30:18.554 [2024-12-11 14:11:11.090295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.554 [2024-12-11 14:11:11.107038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.554 [2024-12-11 14:11:11.107075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:18.554 [2024-12-11 14:11:11.107087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.730 ms 00:30:18.554 [2024-12-11 14:11:11.107096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.554 [2024-12-11 14:11:11.107729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.554 [2024-12-11 14:11:11.107753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:18.554 [2024-12-11 
14:11:11.107765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.539 ms 00:30:18.554 [2024-12-11 14:11:11.107775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.554 [2024-12-11 14:11:11.186782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.554 [2024-12-11 14:11:11.186847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:18.554 [2024-12-11 14:11:11.186863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 79.113 ms 00:30:18.554 [2024-12-11 14:11:11.186875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.554 [2024-12-11 14:11:11.196602] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:18.554 [2024-12-11 14:11:11.197185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.554 [2024-12-11 14:11:11.197209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:18.554 [2024-12-11 14:11:11.197221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.283 ms 00:30:18.554 [2024-12-11 14:11:11.197231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.554 [2024-12-11 14:11:11.197313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.554 [2024-12-11 14:11:11.197328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:18.554 [2024-12-11 14:11:11.197339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:18.554 [2024-12-11 14:11:11.197349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.554 [2024-12-11 14:11:11.197409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.554 [2024-12-11 14:11:11.197420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:18.554 [2024-12-11 14:11:11.197431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:18.554 [2024-12-11 14:11:11.197441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.554 [2024-12-11 14:11:11.197462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.554 [2024-12-11 14:11:11.197473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:18.554 [2024-12-11 14:11:11.197487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:18.554 [2024-12-11 14:11:11.197497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.554 [2024-12-11 14:11:11.197530] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:18.554 [2024-12-11 14:11:11.197542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.554 [2024-12-11 14:11:11.197552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:18.554 [2024-12-11 14:11:11.197561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:18.554 [2024-12-11 14:11:11.197570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.554 [2024-12-11 14:11:11.231614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.554 [2024-12-11 14:11:11.231658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:18.554 [2024-12-11 14:11:11.231672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.078 ms 00:30:18.554 [2024-12-11 14:11:11.231682] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:18.554 [2024-12-11 14:11:11.231749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:18.554 [2024-12-11 14:11:11.231761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:30:18.554 [2024-12-11 14:11:11.231772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms
00:30:18.554 [2024-12-11 14:11:11.231782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:18.554 [2024-12-11 14:11:11.233093] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4089.309 ms, result 0
00:30:18.554 [2024-12-11 14:11:11.247950] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:18.554 [2024-12-11 14:11:11.263948] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:30:18.554 [2024-12-11 14:11:11.272472] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:30:18.813 14:11:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:18.813 14:11:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:30:18.813 14:11:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:30:18.813 14:11:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:30:18.813 14:11:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:30:19.072 [2024-12-11 14:11:11.915957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:19.072 [2024-12-11 14:11:11.915996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:30:19.072 [2024-12-11 14:11:11.916013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms
00:30:19.072 [2024-12-11 14:11:11.916023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:19.072 [2024-12-11 14:11:11.916045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:19.072 [2024-12-11 14:11:11.916055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:30:19.072 [2024-12-11 14:11:11.916065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:30:19.072 [2024-12-11 14:11:11.916074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:19.072 [2024-12-11 14:11:11.916093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:19.072 [2024-12-11 14:11:11.916104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:30:19.072 [2024-12-11 14:11:11.916113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:30:19.072 [2024-12-11 14:11:11.916123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:19.072 [2024-12-11 14:11:11.916174] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.213 ms, result 0
00:30:19.072 true
00:30:19.072 14:11:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:30:19.332 {
00:30:19.332 "name": "ftl",
00:30:19.332 "properties": [
00:30:19.332 {
00:30:19.332 "name": "superblock_version",
00:30:19.332 "value": 5,
00:30:19.332 "read-only": true
00:30:19.332 },
00:30:19.332 {
00:30:19.332 "name": "base_device",
00:30:19.332 "bands": [
00:30:19.332 {
00:30:19.332 "id": 0,
00:30:19.332 "state": "CLOSED",
00:30:19.332 "validity": 1.0
00:30:19.332 },
00:30:19.332 {
00:30:19.332 "id": 1,
00:30:19.332 "state": "CLOSED",
00:30:19.332 "validity": 1.0
00:30:19.332 },
00:30:19.332 {
00:30:19.332 "id": 2,
00:30:19.332 "state": "CLOSED",
00:30:19.332 "validity": 0.007843137254901933
00:30:19.332 },
00:30:19.332 {
00:30:19.332 "id": 3,
00:30:19.332 "state": "FREE",
00:30:19.332 "validity": 0.0
00:30:19.332 },
00:30:19.332 {
00:30:19.332 "id": 4,
00:30:19.332 "state": "FREE",
00:30:19.332 "validity": 0.0
00:30:19.332 },
00:30:19.332 {
00:30:19.332 "id": 5,
00:30:19.332 "state": "FREE",
00:30:19.332 "validity": 0.0
00:30:19.332 },
00:30:19.332 {
00:30:19.332 "id": 6,
00:30:19.332 "state": "FREE",
00:30:19.332 "validity": 0.0
00:30:19.332 },
00:30:19.332 {
00:30:19.332 "id": 7,
00:30:19.332 "state": "FREE",
00:30:19.332 "validity": 0.0
00:30:19.332 },
00:30:19.332 {
00:30:19.332 "id": 8,
00:30:19.332 "state": "FREE",
00:30:19.332 "validity": 0.0
00:30:19.332 },
00:30:19.332 {
00:30:19.332 "id": 9,
00:30:19.332 "state": "FREE",
00:30:19.332 "validity": 0.0
00:30:19.332 },
00:30:19.332 {
00:30:19.332 "id": 10,
00:30:19.332 "state": "FREE",
00:30:19.332 "validity": 0.0
00:30:19.332 },
00:30:19.333 {
00:30:19.333 "id": 11,
00:30:19.333 "state": "FREE",
00:30:19.333 "validity": 0.0
00:30:19.333 },
00:30:19.333 {
00:30:19.333 "id": 12,
00:30:19.333 "state": "FREE",
00:30:19.333 "validity": 0.0
00:30:19.333 },
00:30:19.333 {
00:30:19.333 "id": 13,
00:30:19.333 "state": "FREE",
00:30:19.333 "validity": 0.0
00:30:19.333 },
00:30:19.333 {
00:30:19.333 "id": 14,
00:30:19.333 "state": "FREE",
00:30:19.333 "validity": 0.0
00:30:19.333 },
00:30:19.333 {
00:30:19.333 "id": 15,
00:30:19.333 "state": "FREE",
00:30:19.333 "validity": 0.0
00:30:19.333 },
00:30:19.333 {
00:30:19.333 "id": 16,
00:30:19.333 "state": "FREE",
00:30:19.333 "validity": 0.0
00:30:19.333 },
00:30:19.333 {
00:30:19.333 "id": 17,
00:30:19.333 "state": "FREE",
00:30:19.333 "validity": 0.0
00:30:19.333 }
00:30:19.333 ],
00:30:19.333 "read-only": true
00:30:19.333 },
00:30:19.333 {
00:30:19.333 "name": "cache_device",
00:30:19.333 "type": "bdev",
00:30:19.333 "chunks": [
00:30:19.333 {
00:30:19.333 "id": 0,
00:30:19.333 "state": "INACTIVE",
00:30:19.333 "utilization": 0.0
00:30:19.333 },
00:30:19.333 {
00:30:19.333 "id": 1,
00:30:19.333 "state": "OPEN",
00:30:19.333 "utilization": 0.0
00:30:19.333 },
00:30:19.333 {
00:30:19.333 "id": 2,
00:30:19.333 "state": "OPEN",
00:30:19.333 "utilization": 0.0
00:30:19.333 },
00:30:19.333 {
00:30:19.333 "id": 3,
00:30:19.333 "state": "FREE",
00:30:19.333 "utilization": 0.0
00:30:19.333 },
00:30:19.333 {
00:30:19.333 "id": 4,
00:30:19.333 "state": "FREE",
00:30:19.333 "utilization": 0.0
00:30:19.333 }
00:30:19.333 ],
00:30:19.333 "read-only": true
00:30:19.333 },
00:30:19.333 {
00:30:19.333 "name": "verbose_mode",
00:30:19.333 "value": true,
00:30:19.333 "unit": "",
00:30:19.333 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:30:19.333 },
00:30:19.333 {
00:30:19.333 "name": "prep_upgrade_on_shutdown",
00:30:19.333 "value": false,
00:30:19.333 "unit": "",
00:30:19.333 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:30:19.333 }
00:30:19.333 ]
00:30:19.333 }
00:30:19.333 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
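The dump above is the full bdev_ftl_get_properties response right after verbose_mode was enabled; the trace lines that follow condense it into two numbers, used and opened. A minimal sketch of that accounting, assuming only a running target with an FTL bdev named "ftl" on the default RPC socket (the jq filters are copied verbatim from the upgrade_shutdown.sh@82 and @89 traces below, and the SPDK path mirrors the workspace layout seen in this log):

```bash
#!/usr/bin/env bash
# Sketch: condense bdev_ftl_get_properties output the way the test does.
SPDK=/home/vagrant/spdk_repo/spdk

props=$("$SPDK/scripts/rpc.py" bdev_ftl_get_properties -b ftl)

# NV cache chunks that still hold data (utilization != 0.0).
used=$(jq '[.properties[] | select(.name == "cache_device")
            | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")

# Bands reported as OPENED (filter reproduced verbatim from the trace).
opened=$(jq '[.properties[] | select(.name == "bands")
              | .bands[] | select(.state == "OPENED")] | length' <<< "$props")

echo "used=$used opened=$opened"
# The test only proceeds to checksum validation when both counts are zero.
[[ $used -eq 0 && $opened -eq 0 ]]
```

In this run both queries return 0, so the [[ 0 -ne 0 ]] guards fall through and the checksum validation starts.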
00:30:19.333 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:30:19.333 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:30:19.333 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0
00:30:19.333 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
00:30:19.333 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
00:30:19.333 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:30:19.333 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
00:30:19.593 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0
00:30:19.593 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
00:30:19.593 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
00:30:19.593 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:30:19.593 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:30:19.593 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:30:19.593 Validate MD5 checksum, iteration 1
00:30:19.593 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:30:19.593 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:30:19.593 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:30:19.593 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:30:19.593 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:30:19.593 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:30:19.593 14:11:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:30:19.853 [2024-12-11 14:11:12.676776] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization...
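tcp_dd here is the ftl/common.sh helper that wraps the spdk_dd invocation traced right above; the spdk_dd run it launched starts in the records that follow. A condensed sketch of the whole validation pass, assuming the reference digests were recorded earlier in the test (the expected array below simply reuses the two sums that show up in this run's traces; it is a placeholder, not part of the harness):

```bash
#!/usr/bin/env bash
# Sketch of test_validate_checksum as traced here: read two 1 GiB windows
# of ftln1 over NVMe/TCP and md5-check each against a recorded digest.
SPDK=/home/vagrant/spdk_repo/spdk
FILE=$SPDK/test/ftl/file
# Placeholder digests, taken from the sums printed later in this log.
expected=(6d82c68f0392dfbe62ebb1fce2778f28 2705dc1017c2707dff658ec23b0b2efb)
iterations=2
skip=0

for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    # Same spdk_dd flags as the ftl/common.sh@199 trace: 1024 blocks of
    # 1 MiB, queue depth 2, offset by $skip blocks into the bdev.
    "$SPDK/build/bin/spdk_dd" '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json="$SPDK/test/ftl/config/ini.json" \
        --ib=ftln1 --of="$FILE" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    sum=$(md5sum "$FILE" | cut -f1 -d ' ')
    [[ $sum == "${expected[i]}" ]] || exit 1
    skip=$((skip + 1024))
done
```

Both iterations of this first pass match (6d82c68f… and 2705dc10… in the traces below), which is the baseline the post-shutdown pass at upgrade_shutdown.sh@116 re-checks.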
00:30:19.853 [2024-12-11 14:11:12.676909] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85153 ]
00:30:19.853 [2024-12-11 14:11:12.859881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:20.112 [2024-12-11 14:11:12.966891] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:30:22.019  [2024-12-11T14:11:15.066Z] Copying: 714/1024 [MB] (714 MBps)
[2024-12-11T14:11:16.972Z] Copying: 1024/1024 [MB] (average 708 MBps)
00:30:23.925
00:30:23.925
00:30:25.563 [2024-12-11 14:11:18.360432] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85219 ] 00:30:25.563 [2024-12-11 14:11:18.545701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.822 [2024-12-11 14:11:18.672660] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.728  [2024-12-11T14:11:21.034Z] Copying: 654/1024 [MB] (654 MBps) [2024-12-11T14:11:24.320Z] Copying: 1024/1024 [MB] (average 655 MBps) 00:30:31.273 00:30:31.273 14:11:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:31.273 14:11:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2705dc1017c2707dff658ec23b0b2efb 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2705dc1017c2707dff658ec23b0b2efb != \2\7\0\5\d\c\1\0\1\7\c\2\7\0\7\d\f\f\6\5\8\e\c\2\3\b\0\b\2\e\f\b ]] 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 85066 ]] 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 85066 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85293 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85293 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85293 ']' 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.651 14:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.652 14:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:32.652 14:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.652 14:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:32.910 [2024-12-11 14:11:25.736062] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:30:32.910 [2024-12-11 14:11:25.736192] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85293 ] 00:30:32.910 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 85066 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:32.910 [2024-12-11 14:11:25.923575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.168 [2024-12-11 14:11:26.024554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.106 [2024-12-11 14:11:26.949082] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:34.106 [2024-12-11 14:11:26.949148] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:34.106 [2024-12-11 14:11:27.095082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.106 [2024-12-11 14:11:27.095130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:34.106 [2024-12-11 14:11:27.095146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:34.106 [2024-12-11 14:11:27.095156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.106 [2024-12-11 14:11:27.095237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.106 [2024-12-11 14:11:27.095250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:34.106 [2024-12-11 14:11:27.095261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:30:34.106 [2024-12-11 14:11:27.095271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.106 [2024-12-11 14:11:27.095300] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:34.106 [2024-12-11 14:11:27.096279] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:34.106 [2024-12-11 14:11:27.096309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.106 [2024-12-11 14:11:27.096321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:34.106 [2024-12-11 14:11:27.096332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.022 ms 00:30:34.106 [2024-12-11 14:11:27.096342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.106 [2024-12-11 14:11:27.096684] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:34.106 [2024-12-11 14:11:27.120013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.106 [2024-12-11 14:11:27.120056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:34.106 [2024-12-11 14:11:27.120087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.367 ms 00:30:34.106 [2024-12-11 14:11:27.120098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.106 [2024-12-11 14:11:27.133558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:34.106 [2024-12-11 14:11:27.133600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:34.106 [2024-12-11 14:11:27.133612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:30:34.106 [2024-12-11 14:11:27.133621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.106 [2024-12-11 14:11:27.134129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.106 [2024-12-11 14:11:27.134158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:34.106 [2024-12-11 14:11:27.134170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.416 ms 00:30:34.106 [2024-12-11 14:11:27.134180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.106 [2024-12-11 14:11:27.134242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.106 [2024-12-11 14:11:27.134255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:34.106 [2024-12-11 14:11:27.134266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:30:34.106 [2024-12-11 14:11:27.134275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.106 [2024-12-11 14:11:27.134301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.106 [2024-12-11 14:11:27.134311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:34.106 [2024-12-11 14:11:27.134321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:34.106 [2024-12-11 14:11:27.134332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.106 [2024-12-11 14:11:27.134353] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:34.106 [2024-12-11 14:11:27.138225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.106 [2024-12-11 14:11:27.138255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:34.106 [2024-12-11 14:11:27.138267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.883 ms 00:30:34.106 [2024-12-11 14:11:27.138277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.106 [2024-12-11 14:11:27.138314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.106 [2024-12-11 14:11:27.138325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:34.106 [2024-12-11 14:11:27.138335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:34.106 [2024-12-11 14:11:27.138345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.106 [2024-12-11 14:11:27.138380] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:34.106 [2024-12-11 14:11:27.138403] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:34.106 [2024-12-11 14:11:27.138437] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:34.106 [2024-12-11 14:11:27.138457] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:34.106 [2024-12-11 14:11:27.138546] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:34.106 [2024-12-11 14:11:27.138559] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:34.106 [2024-12-11 14:11:27.138572] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:34.106 [2024-12-11 14:11:27.138586] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:34.106 [2024-12-11 14:11:27.138597] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:34.106 [2024-12-11 14:11:27.138608] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:34.106 [2024-12-11 14:11:27.138618] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:34.106 [2024-12-11 14:11:27.138627] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:34.106 [2024-12-11 14:11:27.138637] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:34.106 [2024-12-11 14:11:27.138647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.106 [2024-12-11 14:11:27.138661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:34.106 [2024-12-11 14:11:27.138671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.270 ms 00:30:34.106 [2024-12-11 14:11:27.138680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.106 [2024-12-11 14:11:27.138751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.106 [2024-12-11 14:11:27.138762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:34.106 [2024-12-11 14:11:27.138772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:30:34.106 [2024-12-11 14:11:27.138781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.106 [2024-12-11 14:11:27.138880] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:34.106 [2024-12-11 14:11:27.138892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:34.106 [2024-12-11 14:11:27.138906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:34.106 [2024-12-11 14:11:27.138916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.106 [2024-12-11 14:11:27.138926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:34.106 [2024-12-11 14:11:27.138936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:34.106 [2024-12-11 14:11:27.138946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:34.106 [2024-12-11 14:11:27.138955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:34.106 [2024-12-11 14:11:27.138965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:34.106 [2024-12-11 14:11:27.138974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.106 [2024-12-11 14:11:27.138983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:34.107 [2024-12-11 14:11:27.138993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:34.107 [2024-12-11 14:11:27.139002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.107 [2024-12-11 14:11:27.139011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:34.107 [2024-12-11 14:11:27.139020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:34.107 [2024-12-11 14:11:27.139029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.107 [2024-12-11 14:11:27.139039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:34.107 [2024-12-11 14:11:27.139048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:34.107 [2024-12-11 14:11:27.139057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.107 [2024-12-11 14:11:27.139066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:34.107 [2024-12-11 14:11:27.139075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:34.107 [2024-12-11 14:11:27.139096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:34.107 [2024-12-11 14:11:27.139105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:34.107 [2024-12-11 14:11:27.139114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:34.107 [2024-12-11 14:11:27.139123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:34.107 [2024-12-11 14:11:27.139132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:34.107 [2024-12-11 14:11:27.139141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:34.107 [2024-12-11 14:11:27.139150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:34.107 [2024-12-11 14:11:27.139159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:34.107 [2024-12-11 14:11:27.139169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:34.107 [2024-12-11 14:11:27.139178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:34.107 [2024-12-11 14:11:27.139187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:34.107 [2024-12-11 14:11:27.139196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:34.107 [2024-12-11 14:11:27.139205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.107 [2024-12-11 14:11:27.139214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:34.107 [2024-12-11 14:11:27.139223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:34.107 [2024-12-11 14:11:27.139232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.107 [2024-12-11 14:11:27.139242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:34.107 [2024-12-11 14:11:27.139251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:34.107 [2024-12-11 14:11:27.139259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.107 [2024-12-11 14:11:27.139268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:34.107 [2024-12-11 14:11:27.139277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:34.107 [2024-12-11 14:11:27.139286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.107 [2024-12-11 14:11:27.139295] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:34.107 [2024-12-11 14:11:27.139305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:34.107 [2024-12-11 14:11:27.139315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:34.107 [2024-12-11 14:11:27.139324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:34.107 [2024-12-11 14:11:27.139334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:34.107 [2024-12-11 14:11:27.139344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:34.107 [2024-12-11 14:11:27.139353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:34.107 [2024-12-11 14:11:27.139362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:34.107 [2024-12-11 14:11:27.139371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:34.107 [2024-12-11 14:11:27.139380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:34.107 [2024-12-11 14:11:27.139390] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:34.107 [2024-12-11 14:11:27.139402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:34.107 [2024-12-11 14:11:27.139414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:34.107 [2024-12-11 14:11:27.139424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:34.107 [2024-12-11 14:11:27.139434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:34.107 [2024-12-11 14:11:27.139445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:34.107 [2024-12-11 14:11:27.139455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:34.107 [2024-12-11 14:11:27.139465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:34.107 [2024-12-11 14:11:27.139475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:34.107 [2024-12-11 14:11:27.139485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:34.107 [2024-12-11 14:11:27.139496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:34.107 [2024-12-11 14:11:27.139506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:34.107 [2024-12-11 14:11:27.139516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:34.107 [2024-12-11 14:11:27.139526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:34.107 [2024-12-11 14:11:27.139536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:34.107 [2024-12-11 14:11:27.139546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:34.107 [2024-12-11 14:11:27.139556] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:34.107 [2024-12-11 14:11:27.139568] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:34.107 [2024-12-11 14:11:27.139583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:34.107 [2024-12-11 14:11:27.139594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:34.107 [2024-12-11 14:11:27.139604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:34.107 [2024-12-11 14:11:27.139614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:34.107 [2024-12-11 14:11:27.139625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.107 [2024-12-11 14:11:27.139635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:34.107 [2024-12-11 14:11:27.139646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.813 ms 00:30:34.107 [2024-12-11 14:11:27.139655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.367 [2024-12-11 14:11:27.173306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.367 [2024-12-11 14:11:27.173347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:34.367 [2024-12-11 14:11:27.173376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.658 ms 00:30:34.367 [2024-12-11 14:11:27.173387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.367 [2024-12-11 14:11:27.173424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.367 [2024-12-11 14:11:27.173435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:34.367 [2024-12-11 14:11:27.173446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:34.367 [2024-12-11 14:11:27.173456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.367 [2024-12-11 14:11:27.218552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.367 [2024-12-11 14:11:27.218591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:34.367 [2024-12-11 14:11:27.218604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.113 ms 00:30:34.367 [2024-12-11 14:11:27.218614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.367 [2024-12-11 14:11:27.218660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.367 [2024-12-11 14:11:27.218671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:34.367 [2024-12-11 14:11:27.218682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:34.367 [2024-12-11 14:11:27.218696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.367 [2024-12-11 14:11:27.218821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.367 [2024-12-11 14:11:27.218834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:34.367 [2024-12-11 14:11:27.218869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:34.367 [2024-12-11 14:11:27.218879] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:34.367 [2024-12-11 14:11:27.218922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.367 [2024-12-11 14:11:27.218932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:34.367 [2024-12-11 14:11:27.218943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:34.367 [2024-12-11 14:11:27.218953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.367 [2024-12-11 14:11:27.238057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.367 [2024-12-11 14:11:27.238092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:34.367 [2024-12-11 14:11:27.238120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.106 ms 00:30:34.367 [2024-12-11 14:11:27.238133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.367 [2024-12-11 14:11:27.238256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.367 [2024-12-11 14:11:27.238272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:34.367 [2024-12-11 14:11:27.238283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:34.367 [2024-12-11 14:11:27.238293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.367 [2024-12-11 14:11:27.272104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.367 [2024-12-11 14:11:27.272142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:34.367 [2024-12-11 14:11:27.272156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.841 ms 00:30:34.367 [2024-12-11 14:11:27.272192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.367 [2024-12-11 14:11:27.286335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.367 [2024-12-11 14:11:27.286369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:34.367 [2024-12-11 14:11:27.286389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.641 ms 00:30:34.367 [2024-12-11 14:11:27.286398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.367 [2024-12-11 14:11:27.365786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.367 [2024-12-11 14:11:27.365858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:34.367 [2024-12-11 14:11:27.365876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 79.439 ms 00:30:34.367 [2024-12-11 14:11:27.365902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.367 [2024-12-11 14:11:27.366078] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:34.367 [2024-12-11 14:11:27.366204] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:34.367 [2024-12-11 14:11:27.366327] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:34.367 [2024-12-11 14:11:27.366433] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:34.367 [2024-12-11 14:11:27.366445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.367 [2024-12-11 14:11:27.366456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:34.367 [2024-12-11 
14:11:27.366467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.497 ms 00:30:34.367 [2024-12-11 14:11:27.366477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.367 [2024-12-11 14:11:27.366561] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:34.367 [2024-12-11 14:11:27.366576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.367 [2024-12-11 14:11:27.366590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:34.367 [2024-12-11 14:11:27.366601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:34.367 [2024-12-11 14:11:27.366611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.367 [2024-12-11 14:11:27.388008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.367 [2024-12-11 14:11:27.388053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:34.367 [2024-12-11 14:11:27.388082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.385 ms 00:30:34.367 [2024-12-11 14:11:27.388093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.367 [2024-12-11 14:11:27.401144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.367 [2024-12-11 14:11:27.401177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:34.367 [2024-12-11 14:11:27.401205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:34.367 [2024-12-11 14:11:27.401215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.367 [2024-12-11 14:11:27.401306] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:34.367 [2024-12-11 14:11:27.401495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.367 [2024-12-11 14:11:27.401521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:34.367 [2024-12-11 14:11:27.401532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:30:34.367 [2024-12-11 14:11:27.401541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.305 [2024-12-11 14:11:28.011317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.305 [2024-12-11 14:11:28.011390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:35.305 [2024-12-11 14:11:28.011408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 609.654 ms 00:30:35.305 [2024-12-11 14:11:28.011419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.305 [2024-12-11 14:11:28.017704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.305 [2024-12-11 14:11:28.017747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:35.305 [2024-12-11 14:11:28.017760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.458 ms 00:30:35.305 [2024-12-11 14:11:28.017771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.305 [2024-12-11 14:11:28.029602] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:35.305 [2024-12-11 14:11:28.029643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.305 [2024-12-11 14:11:28.029655] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:35.305 [2024-12-11 14:11:28.029667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.854 ms 00:30:35.305 [2024-12-11 14:11:28.029678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.305 [2024-12-11 14:11:28.029715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.305 [2024-12-11 14:11:28.029727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:35.305 [2024-12-11 14:11:28.029738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:35.305 [2024-12-11 14:11:28.029755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.305 [2024-12-11 14:11:28.029792] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 629.505 ms, result 0 00:30:35.305 [2024-12-11 14:11:28.029845] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:35.305 [2024-12-11 14:11:28.029932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.305 [2024-12-11 14:11:28.029941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:35.305 [2024-12-11 14:11:28.029951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.088 ms 00:30:35.305 [2024-12-11 14:11:28.029961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.875 [2024-12-11 14:11:28.635778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.875 [2024-12-11 14:11:28.635857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:35.875 [2024-12-11 14:11:28.635892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 605.570 ms 00:30:35.875 [2024-12-11 14:11:28.635903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.875 [2024-12-11 14:11:28.641990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.875 [2024-12-11 14:11:28.642032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:35.875 [2024-12-11 14:11:28.642045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.548 ms 00:30:35.875 [2024-12-11 14:11:28.642057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.875 [2024-12-11 14:11:28.642665] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:35.875 [2024-12-11 14:11:28.642696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.875 [2024-12-11 14:11:28.642707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:35.875 [2024-12-11 14:11:28.642718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.612 ms 00:30:35.875 [2024-12-11 14:11:28.642729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.875 [2024-12-11 14:11:28.642760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.875 [2024-12-11 14:11:28.642771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:35.875 [2024-12-11 14:11:28.642782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:35.875 [2024-12-11 14:11:28.642792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.875 [2024-12-11 
14:11:28.642839] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 613.988 ms, result 0 00:30:35.875 [2024-12-11 14:11:28.642881] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:35.875 [2024-12-11 14:11:28.642893] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:35.875 [2024-12-11 14:11:28.642906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.875 [2024-12-11 14:11:28.642917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:35.875 [2024-12-11 14:11:28.642927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1243.638 ms 00:30:35.875 [2024-12-11 14:11:28.642937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.875 [2024-12-11 14:11:28.642966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.875 [2024-12-11 14:11:28.642981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:35.875 [2024-12-11 14:11:28.642992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:35.875 [2024-12-11 14:11:28.643001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.875 [2024-12-11 14:11:28.654132] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:35.875 [2024-12-11 14:11:28.654292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.875 [2024-12-11 14:11:28.654305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:35.875 [2024-12-11 14:11:28.654317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.289 ms 00:30:35.875 [2024-12-11 14:11:28.654327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.875 [2024-12-11 14:11:28.654928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.875 [2024-12-11 14:11:28.654952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:35.875 [2024-12-11 14:11:28.654968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.522 ms 00:30:35.875 [2024-12-11 14:11:28.654978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.875 [2024-12-11 14:11:28.656957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.875 [2024-12-11 14:11:28.656983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:35.875 [2024-12-11 14:11:28.656994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.963 ms 00:30:35.875 [2024-12-11 14:11:28.657004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.875 [2024-12-11 14:11:28.657054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.875 [2024-12-11 14:11:28.657066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:35.875 [2024-12-11 14:11:28.657077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:35.875 [2024-12-11 14:11:28.657092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.875 [2024-12-11 14:11:28.657186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.875 [2024-12-11 14:11:28.657198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:35.875 
[2024-12-11 14:11:28.657208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:35.875 [2024-12-11 14:11:28.657217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.875 [2024-12-11 14:11:28.657238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.875 [2024-12-11 14:11:28.657248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:35.875 [2024-12-11 14:11:28.657258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:35.875 [2024-12-11 14:11:28.657268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.875 [2024-12-11 14:11:28.657308] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:35.875 [2024-12-11 14:11:28.657319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.875 [2024-12-11 14:11:28.657345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:35.875 [2024-12-11 14:11:28.657355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:35.876 [2024-12-11 14:11:28.657365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.876 [2024-12-11 14:11:28.657412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.876 [2024-12-11 14:11:28.657423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:35.876 [2024-12-11 14:11:28.657433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:35.876 [2024-12-11 14:11:28.657443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.876 [2024-12-11 14:11:28.658524] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1565.540 ms, result 0 00:30:35.876 [2024-12-11 14:11:28.670844] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.876 [2024-12-11 14:11:28.686813] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:35.876 [2024-12-11 14:11:28.696200] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:35.876 14:11:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:35.876 14:11:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:35.876 14:11:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:35.876 14:11:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:35.876 14:11:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:35.876 14:11:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:35.876 14:11:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:35.876 14:11:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:35.876 Validate MD5 checksum, iteration 1 00:30:35.876 14:11:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:35.876 14:11:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:35.876 14:11:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:35.876 14:11:28 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:35.876 14:11:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:35.876 14:11:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:35.876 14:11:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:35.876 [2024-12-11 14:11:28.833376] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:30:35.876 [2024-12-11 14:11:28.833498] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85333 ] 00:30:36.135 [2024-12-11 14:11:29.015392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.135 [2024-12-11 14:11:29.141635] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.038  [2024-12-11T14:11:31.654Z] Copying: 646/1024 [MB] (646 MBps) [2024-12-11T14:11:34.187Z] Copying: 1024/1024 [MB] (average 639 MBps) 00:30:41.140 00:30:41.141 14:11:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:41.141 14:11:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:43.046 14:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:43.046 Validate MD5 checksum, iteration 2 00:30:43.046 14:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6d82c68f0392dfbe62ebb1fce2778f28 00:30:43.046 14:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6d82c68f0392dfbe62ebb1fce2778f28 != \6\d\8\2\c\6\8\f\0\3\9\2\d\f\b\e\6\2\e\b\b\1\f\c\e\2\7\7\8\f\2\8 ]] 00:30:43.046 14:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:43.046 14:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:43.046 14:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:43.046 14:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:43.046 14:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:43.046 14:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:43.046 14:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:43.046 14:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:43.046 14:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:43.046 [2024-12-11 14:11:35.701909] Starting SPDK v25.01-pre git sha1 
4dfeb7f95 / DPDK 24.03.0 initialization... 00:30:43.046 [2024-12-11 14:11:35.702044] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85406 ] 00:30:43.046 [2024-12-11 14:11:35.881543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.046 [2024-12-11 14:11:36.012718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.953  [2024-12-11T14:11:38.568Z] Copying: 645/1024 [MB] (645 MBps) [2024-12-11T14:11:39.948Z] Copying: 1024/1024 [MB] (average 643 MBps) 00:30:46.901 00:30:46.901 14:11:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:46.901 14:11:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2705dc1017c2707dff658ec23b0b2efb 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2705dc1017c2707dff658ec23b0b2efb != \2\7\0\5\d\c\1\0\1\7\c\2\7\0\7\d\f\f\6\5\8\e\c\2\3\b\0\b\2\e\f\b ]] 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85293 ]] 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85293 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85293 ']' 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85293 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85293 00:30:48.807 killing process with pid 85293 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85293' 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 85293 00:30:48.807 14:11:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85293 00:30:49.786 [2024-12-11 14:11:42.659916] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:49.786 [2024-12-11 14:11:42.678291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.786 [2024-12-11 14:11:42.678327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:49.786 [2024-12-11 14:11:42.678342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:49.786 [2024-12-11 14:11:42.678352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.786 [2024-12-11 14:11:42.678375] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:49.786 [2024-12-11 14:11:42.682477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.786 [2024-12-11 14:11:42.682506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:49.786 [2024-12-11 14:11:42.682523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.092 ms 00:30:49.786 [2024-12-11 14:11:42.682533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.786 [2024-12-11 14:11:42.682745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.786 [2024-12-11 14:11:42.682758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:49.786 [2024-12-11 14:11:42.682769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.185 ms 00:30:49.786 [2024-12-11 14:11:42.682779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.786 [2024-12-11 14:11:42.683994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.786 [2024-12-11 14:11:42.684019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:49.786 [2024-12-11 14:11:42.684031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.200 ms 00:30:49.786 [2024-12-11 14:11:42.684047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.786 [2024-12-11 14:11:42.684991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.786 [2024-12-11 14:11:42.685014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:49.786 [2024-12-11 14:11:42.685025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.898 ms 00:30:49.786 [2024-12-11 14:11:42.685036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.786 [2024-12-11 14:11:42.700242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.787 [2024-12-11 14:11:42.700272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:49.787 [2024-12-11 14:11:42.700284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.195 ms 00:30:49.787 [2024-12-11 14:11:42.700316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.787 [2024-12-11 14:11:42.708162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.787 [2024-12-11 14:11:42.708192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:49.787 [2024-12-11 14:11:42.708204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.822 ms 00:30:49.787 [2024-12-11 14:11:42.708214] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:49.787 [2024-12-11 14:11:42.708281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.787 [2024-12-11 14:11:42.708292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:49.787 [2024-12-11 14:11:42.708302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:49.787 [2024-12-11 14:11:42.708317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.787 [2024-12-11 14:11:42.722588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.787 [2024-12-11 14:11:42.722635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:49.787 [2024-12-11 14:11:42.722648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.277 ms 00:30:49.787 [2024-12-11 14:11:42.722674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.787 [2024-12-11 14:11:42.737138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.787 [2024-12-11 14:11:42.737172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:49.787 [2024-12-11 14:11:42.737183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.452 ms 00:30:49.787 [2024-12-11 14:11:42.737192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.787 [2024-12-11 14:11:42.751337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.787 [2024-12-11 14:11:42.751500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:49.787 [2024-12-11 14:11:42.751520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.135 ms 00:30:49.787 [2024-12-11 14:11:42.751531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.787 [2024-12-11 14:11:42.765488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.787 [2024-12-11 14:11:42.765612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:49.787 [2024-12-11 14:11:42.765647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.908 ms 00:30:49.787 [2024-12-11 14:11:42.765657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.787 [2024-12-11 14:11:42.765723] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:49.787 [2024-12-11 14:11:42.765738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:49.787 [2024-12-11 14:11:42.765751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:49.787 [2024-12-11 14:11:42.765761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:49.787 [2024-12-11 14:11:42.765772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:49.787 [2024-12-11 14:11:42.765783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:49.787 [2024-12-11 14:11:42.765793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:49.787 [2024-12-11 14:11:42.765804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:49.787 [2024-12-11 14:11:42.765813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:49.787 
[2024-12-11 14:11:42.765837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:49.787 [2024-12-11 14:11:42.765864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:49.787 [2024-12-11 14:11:42.765875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:49.787 [2024-12-11 14:11:42.765886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:49.787 [2024-12-11 14:11:42.765897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:49.787 [2024-12-11 14:11:42.765907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:49.787 [2024-12-11 14:11:42.765918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:49.787 [2024-12-11 14:11:42.765929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:49.787 [2024-12-11 14:11:42.765939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:49.787 [2024-12-11 14:11:42.765950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:49.787 [2024-12-11 14:11:42.765963] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:49.787 [2024-12-11 14:11:42.765973] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: dbe95b4b-8755-43da-acf2-d6b0cf13ae48 00:30:49.787 [2024-12-11 14:11:42.765985] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:49.787 [2024-12-11 14:11:42.765994] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:49.787 [2024-12-11 14:11:42.766004] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:49.787 [2024-12-11 14:11:42.766014] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:49.787 [2024-12-11 14:11:42.766024] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:49.787 [2024-12-11 14:11:42.766034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:49.787 [2024-12-11 14:11:42.766049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:49.787 [2024-12-11 14:11:42.766059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:49.787 [2024-12-11 14:11:42.766069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:49.787 [2024-12-11 14:11:42.766079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.787 [2024-12-11 14:11:42.766091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:49.787 [2024-12-11 14:11:42.766101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.358 ms 00:30:49.787 [2024-12-11 14:11:42.766111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.787 [2024-12-11 14:11:42.784948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.787 [2024-12-11 14:11:42.785062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:49.787 [2024-12-11 14:11:42.785148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.836 ms 00:30:49.787 [2024-12-11 14:11:42.785182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:30:49.787 [2024-12-11 14:11:42.785760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.787 [2024-12-11 14:11:42.785796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:49.787 [2024-12-11 14:11:42.786001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.531 ms 00:30:49.787 [2024-12-11 14:11:42.786039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.047 [2024-12-11 14:11:42.848150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.047 [2024-12-11 14:11:42.848305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:50.047 [2024-12-11 14:11:42.848418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.047 [2024-12-11 14:11:42.848462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.047 [2024-12-11 14:11:42.848513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.047 [2024-12-11 14:11:42.848546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:50.047 [2024-12-11 14:11:42.848576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.047 [2024-12-11 14:11:42.848606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.047 [2024-12-11 14:11:42.848700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.047 [2024-12-11 14:11:42.848804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:50.047 [2024-12-11 14:11:42.848878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.047 [2024-12-11 14:11:42.848909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.047 [2024-12-11 14:11:42.848959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.047 [2024-12-11 14:11:42.848992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:50.047 [2024-12-11 14:11:42.849021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.047 [2024-12-11 14:11:42.849060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.047 [2024-12-11 14:11:42.966002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.047 [2024-12-11 14:11:42.966236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:50.047 [2024-12-11 14:11:42.966395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.047 [2024-12-11 14:11:42.966432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.047 [2024-12-11 14:11:43.061628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.047 [2024-12-11 14:11:43.061810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:50.047 [2024-12-11 14:11:43.061952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.047 [2024-12-11 14:11:43.061993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.047 [2024-12-11 14:11:43.062109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.047 [2024-12-11 14:11:43.062144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:50.047 [2024-12-11 14:11:43.062190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.047 [2024-12-11 14:11:43.062277] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.047 [2024-12-11 14:11:43.062363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.047 [2024-12-11 14:11:43.062414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:50.047 [2024-12-11 14:11:43.062445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.047 [2024-12-11 14:11:43.062571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.047 [2024-12-11 14:11:43.062706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.047 [2024-12-11 14:11:43.062741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:50.047 [2024-12-11 14:11:43.062816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.047 [2024-12-11 14:11:43.062864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.047 [2024-12-11 14:11:43.062935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.047 [2024-12-11 14:11:43.062969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:50.047 [2024-12-11 14:11:43.063005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.047 [2024-12-11 14:11:43.063084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.047 [2024-12-11 14:11:43.063150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.047 [2024-12-11 14:11:43.063182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:50.047 [2024-12-11 14:11:43.063212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.047 [2024-12-11 14:11:43.063241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.047 [2024-12-11 14:11:43.063301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.047 [2024-12-11 14:11:43.063357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:50.047 [2024-12-11 14:11:43.063387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.047 [2024-12-11 14:11:43.063416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.047 [2024-12-11 14:11:43.063552] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 385.854 ms, result 0 00:30:51.427 14:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:51.427 14:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:51.427 14:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:51.427 14:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:51.427 14:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:51.427 14:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:51.427 Remove shared memory files 00:30:51.427 14:11:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:51.427 14:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:51.427 14:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:51.427 14:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:51.427 14:11:44 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid85066 00:30:51.427 14:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:51.427 14:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:51.427 ************************************ 00:30:51.427 END TEST ftl_upgrade_shutdown 00:30:51.427 ************************************ 00:30:51.427 00:30:51.427 real 1m27.532s 00:30:51.427 user 1m58.786s 00:30:51.427 sys 0m23.698s 00:30:51.427 14:11:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:51.427 14:11:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:51.427 14:11:44 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:30:51.427 14:11:44 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:30:51.427 14:11:44 ftl -- ftl/ftl.sh@14 -- # killprocess 77774 00:30:51.427 14:11:44 ftl -- common/autotest_common.sh@954 -- # '[' -z 77774 ']' 00:30:51.427 14:11:44 ftl -- common/autotest_common.sh@958 -- # kill -0 77774 00:30:51.427 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77774) - No such process 00:30:51.427 Process with pid 77774 is not found 00:30:51.428 14:11:44 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 77774 is not found' 00:30:51.428 14:11:44 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:30:51.428 14:11:44 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85525 00:30:51.428 14:11:44 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:51.428 14:11:44 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85525 00:30:51.428 14:11:44 ftl -- common/autotest_common.sh@835 -- # '[' -z 85525 ']' 00:30:51.428 14:11:44 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.428 14:11:44 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:51.428 14:11:44 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.428 14:11:44 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:51.428 14:11:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:51.687 [2024-12-11 14:11:44.482851] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:30:51.687 [2024-12-11 14:11:44.482982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85525 ] 00:30:51.687 [2024-12-11 14:11:44.653550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.947 [2024-12-11 14:11:44.761476] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.883 14:11:45 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:52.883 14:11:45 ftl -- common/autotest_common.sh@868 -- # return 0 00:30:52.883 14:11:45 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:52.883 nvme0n1 00:30:52.883 14:11:45 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:30:52.883 14:11:45 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:52.883 14:11:45 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:53.142 14:11:46 ftl -- ftl/common.sh@28 -- # stores=d0b17b41-0ba7-4dd6-8a79-69dcc24e10d6 00:30:53.142 14:11:46 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:30:53.142 14:11:46 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d0b17b41-0ba7-4dd6-8a79-69dcc24e10d6 00:30:53.401 14:11:46 ftl -- ftl/ftl.sh@23 -- # killprocess 85525 00:30:53.401 14:11:46 ftl -- common/autotest_common.sh@954 -- # '[' -z 85525 ']' 00:30:53.401 14:11:46 ftl -- common/autotest_common.sh@958 -- # kill -0 85525 00:30:53.401 14:11:46 ftl -- common/autotest_common.sh@959 -- # uname 00:30:53.401 14:11:46 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:53.401 14:11:46 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85525 00:30:53.401 killing process with pid 85525 00:30:53.401 14:11:46 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:53.401 14:11:46 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:53.401 14:11:46 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85525' 00:30:53.401 14:11:46 ftl -- common/autotest_common.sh@973 -- # kill 85525 00:30:53.401 14:11:46 ftl -- common/autotest_common.sh@978 -- # wait 85525 00:30:55.939 14:11:48 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:55.939 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:56.205 Waiting for block devices as requested 00:30:56.205 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:56.205 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:56.465 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:56.465 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:31:01.746 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:31:01.746 Remove shared memory files 00:31:01.746 14:11:54 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:31:01.746 14:11:54 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:01.746 14:11:54 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:31:01.746 14:11:54 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:31:01.746 14:11:54 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:31:01.746 14:11:54 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:01.746 14:11:54 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:31:01.746 
************************************ 00:31:01.746 END TEST ftl 00:31:01.746 ************************************ 00:31:01.746 00:31:01.746 real 11m31.242s 00:31:01.746 user 14m6.343s 00:31:01.746 sys 1m31.110s 00:31:01.746 14:11:54 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:01.746 14:11:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:01.746 14:11:54 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:01.746 14:11:54 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:01.746 14:11:54 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:01.746 14:11:54 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:01.746 14:11:54 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:01.746 14:11:54 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:01.746 14:11:54 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:01.746 14:11:54 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:31:01.746 14:11:54 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:31:01.746 14:11:54 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:31:01.746 14:11:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:01.746 14:11:54 -- common/autotest_common.sh@10 -- # set +x 00:31:01.746 14:11:54 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:31:01.746 14:11:54 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:31:01.746 14:11:54 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:31:01.746 14:11:54 -- common/autotest_common.sh@10 -- # set +x 00:31:04.285 INFO: APP EXITING 00:31:04.285 INFO: killing all VMs 00:31:04.285 INFO: killing vhost app 00:31:04.285 INFO: EXIT DONE 00:31:04.545 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:05.115 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:05.115 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:05.115 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:31:05.115 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:31:05.685 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:05.945 Cleaning 00:31:05.945 Removing: /var/run/dpdk/spdk0/config 00:31:05.945 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:05.945 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:05.945 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:05.945 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:05.946 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:05.946 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:05.946 Removing: /var/run/dpdk/spdk0 00:31:05.946 Removing: /var/run/dpdk/spdk_pid58668 00:31:05.946 Removing: /var/run/dpdk/spdk_pid58903 00:31:05.946 Removing: /var/run/dpdk/spdk_pid59138 00:31:05.946 Removing: /var/run/dpdk/spdk_pid59242 00:31:05.946 Removing: /var/run/dpdk/spdk_pid59298 00:31:05.946 Removing: /var/run/dpdk/spdk_pid59426 00:31:05.946 Removing: /var/run/dpdk/spdk_pid59444 00:31:05.946 Removing: /var/run/dpdk/spdk_pid59654 00:31:05.946 Removing: /var/run/dpdk/spdk_pid59771 00:31:05.946 Removing: /var/run/dpdk/spdk_pid59878 00:31:05.946 Removing: /var/run/dpdk/spdk_pid60000 00:31:05.946 Removing: /var/run/dpdk/spdk_pid60108 00:31:05.946 Removing: /var/run/dpdk/spdk_pid60148 00:31:06.206 Removing: /var/run/dpdk/spdk_pid60185 00:31:06.206 Removing: /var/run/dpdk/spdk_pid60260 00:31:06.206 Removing: /var/run/dpdk/spdk_pid60376 00:31:06.206 Removing: /var/run/dpdk/spdk_pid60819 00:31:06.206 Removing: /var/run/dpdk/spdk_pid60900 
00:31:06.206 Removing: /var/run/dpdk/spdk_pid60976 00:31:06.206 Removing: /var/run/dpdk/spdk_pid60992 00:31:06.206 Removing: /var/run/dpdk/spdk_pid61146 00:31:06.206 Removing: /var/run/dpdk/spdk_pid61168 00:31:06.206 Removing: /var/run/dpdk/spdk_pid61320 00:31:06.206 Removing: /var/run/dpdk/spdk_pid61336 00:31:06.206 Removing: /var/run/dpdk/spdk_pid61411 00:31:06.206 Removing: /var/run/dpdk/spdk_pid61429 00:31:06.206 Removing: /var/run/dpdk/spdk_pid61494 00:31:06.206 Removing: /var/run/dpdk/spdk_pid61517 00:31:06.206 Removing: /var/run/dpdk/spdk_pid61712 00:31:06.206 Removing: /var/run/dpdk/spdk_pid61754 00:31:06.206 Removing: /var/run/dpdk/spdk_pid61842 00:31:06.206 Removing: /var/run/dpdk/spdk_pid62026 00:31:06.206 Removing: /var/run/dpdk/spdk_pid62124 00:31:06.206 Removing: /var/run/dpdk/spdk_pid62171 00:31:06.206 Removing: /var/run/dpdk/spdk_pid62620 00:31:06.206 Removing: /var/run/dpdk/spdk_pid62724 00:31:06.206 Removing: /var/run/dpdk/spdk_pid62833 00:31:06.206 Removing: /var/run/dpdk/spdk_pid62891 00:31:06.206 Removing: /var/run/dpdk/spdk_pid62912 00:31:06.206 Removing: /var/run/dpdk/spdk_pid62996 00:31:06.206 Removing: /var/run/dpdk/spdk_pid63645 00:31:06.206 Removing: /var/run/dpdk/spdk_pid63687 00:31:06.206 Removing: /var/run/dpdk/spdk_pid64180 00:31:06.206 Removing: /var/run/dpdk/spdk_pid64285 00:31:06.206 Removing: /var/run/dpdk/spdk_pid64400 00:31:06.206 Removing: /var/run/dpdk/spdk_pid64453 00:31:06.206 Removing: /var/run/dpdk/spdk_pid64473 00:31:06.206 Removing: /var/run/dpdk/spdk_pid64504 00:31:06.206 Removing: /var/run/dpdk/spdk_pid66399 00:31:06.206 Removing: /var/run/dpdk/spdk_pid66548 00:31:06.206 Removing: /var/run/dpdk/spdk_pid66552 00:31:06.206 Removing: /var/run/dpdk/spdk_pid66564 00:31:06.206 Removing: /var/run/dpdk/spdk_pid66615 00:31:06.206 Removing: /var/run/dpdk/spdk_pid66619 00:31:06.206 Removing: /var/run/dpdk/spdk_pid66631 00:31:06.206 Removing: /var/run/dpdk/spdk_pid66676 00:31:06.206 Removing: /var/run/dpdk/spdk_pid66680 00:31:06.206 Removing: /var/run/dpdk/spdk_pid66692 00:31:06.206 Removing: /var/run/dpdk/spdk_pid66742 00:31:06.206 Removing: /var/run/dpdk/spdk_pid66746 00:31:06.206 Removing: /var/run/dpdk/spdk_pid66758 00:31:06.206 Removing: /var/run/dpdk/spdk_pid68180 00:31:06.206 Removing: /var/run/dpdk/spdk_pid68289 00:31:06.206 Removing: /var/run/dpdk/spdk_pid69725 00:31:06.466 Removing: /var/run/dpdk/spdk_pid71472 00:31:06.466 Removing: /var/run/dpdk/spdk_pid71553 00:31:06.466 Removing: /var/run/dpdk/spdk_pid71634 00:31:06.466 Removing: /var/run/dpdk/spdk_pid71743 00:31:06.466 Removing: /var/run/dpdk/spdk_pid71842 00:31:06.466 Removing: /var/run/dpdk/spdk_pid71943 00:31:06.466 Removing: /var/run/dpdk/spdk_pid72027 00:31:06.466 Removing: /var/run/dpdk/spdk_pid72103 00:31:06.466 Removing: /var/run/dpdk/spdk_pid72213 00:31:06.466 Removing: /var/run/dpdk/spdk_pid72311 00:31:06.466 Removing: /var/run/dpdk/spdk_pid72413 00:31:06.466 Removing: /var/run/dpdk/spdk_pid72498 00:31:06.466 Removing: /var/run/dpdk/spdk_pid72573 00:31:06.466 Removing: /var/run/dpdk/spdk_pid72683 00:31:06.466 Removing: /var/run/dpdk/spdk_pid72781 00:31:06.466 Removing: /var/run/dpdk/spdk_pid72877 00:31:06.466 Removing: /var/run/dpdk/spdk_pid72962 00:31:06.466 Removing: /var/run/dpdk/spdk_pid73043 00:31:06.466 Removing: /var/run/dpdk/spdk_pid73153 00:31:06.466 Removing: /var/run/dpdk/spdk_pid73251 00:31:06.466 Removing: /var/run/dpdk/spdk_pid73347 00:31:06.466 Removing: /var/run/dpdk/spdk_pid73433 00:31:06.466 Removing: /var/run/dpdk/spdk_pid73507 00:31:06.466 Removing: 
/var/run/dpdk/spdk_pid73587 00:31:06.466 Removing: /var/run/dpdk/spdk_pid73667 00:31:06.466 Removing: /var/run/dpdk/spdk_pid73770 00:31:06.466 Removing: /var/run/dpdk/spdk_pid73870 00:31:06.466 Removing: /var/run/dpdk/spdk_pid73971 00:31:06.466 Removing: /var/run/dpdk/spdk_pid74059 00:31:06.466 Removing: /var/run/dpdk/spdk_pid74133 00:31:06.466 Removing: /var/run/dpdk/spdk_pid74213 00:31:06.466 Removing: /var/run/dpdk/spdk_pid74288 00:31:06.466 Removing: /var/run/dpdk/spdk_pid74397 00:31:06.466 Removing: /var/run/dpdk/spdk_pid74488 00:31:06.466 Removing: /var/run/dpdk/spdk_pid74643 00:31:06.466 Removing: /var/run/dpdk/spdk_pid74934 00:31:06.466 Removing: /var/run/dpdk/spdk_pid74976 00:31:06.466 Removing: /var/run/dpdk/spdk_pid75435 00:31:06.466 Removing: /var/run/dpdk/spdk_pid75627 00:31:06.466 Removing: /var/run/dpdk/spdk_pid75729 00:31:06.466 Removing: /var/run/dpdk/spdk_pid75841 00:31:06.466 Removing: /var/run/dpdk/spdk_pid75894 00:31:06.466 Removing: /var/run/dpdk/spdk_pid75920 00:31:06.466 Removing: /var/run/dpdk/spdk_pid76229 00:31:06.466 Removing: /var/run/dpdk/spdk_pid76300 00:31:06.466 Removing: /var/run/dpdk/spdk_pid76386 00:31:06.466 Removing: /var/run/dpdk/spdk_pid76814 00:31:06.466 Removing: /var/run/dpdk/spdk_pid76963 00:31:06.466 Removing: /var/run/dpdk/spdk_pid77774 00:31:06.466 Removing: /var/run/dpdk/spdk_pid77923 00:31:06.466 Removing: /var/run/dpdk/spdk_pid78138 00:31:06.726 Removing: /var/run/dpdk/spdk_pid78246 00:31:06.726 Removing: /var/run/dpdk/spdk_pid78570 00:31:06.726 Removing: /var/run/dpdk/spdk_pid78830 00:31:06.726 Removing: /var/run/dpdk/spdk_pid79192 00:31:06.726 Removing: /var/run/dpdk/spdk_pid79397 00:31:06.726 Removing: /var/run/dpdk/spdk_pid79538 00:31:06.726 Removing: /var/run/dpdk/spdk_pid79607 00:31:06.726 Removing: /var/run/dpdk/spdk_pid79753 00:31:06.726 Removing: /var/run/dpdk/spdk_pid79787 00:31:06.726 Removing: /var/run/dpdk/spdk_pid79850 00:31:06.726 Removing: /var/run/dpdk/spdk_pid80073 00:31:06.726 Removing: /var/run/dpdk/spdk_pid80309 00:31:06.726 Removing: /var/run/dpdk/spdk_pid80766 00:31:06.726 Removing: /var/run/dpdk/spdk_pid81224 00:31:06.726 Removing: /var/run/dpdk/spdk_pid81692 00:31:06.726 Removing: /var/run/dpdk/spdk_pid82209 00:31:06.726 Removing: /var/run/dpdk/spdk_pid82358 00:31:06.726 Removing: /var/run/dpdk/spdk_pid82452 00:31:06.726 Removing: /var/run/dpdk/spdk_pid83084 00:31:06.726 Removing: /var/run/dpdk/spdk_pid83149 00:31:06.726 Removing: /var/run/dpdk/spdk_pid83625 00:31:06.726 Removing: /var/run/dpdk/spdk_pid83995 00:31:06.726 Removing: /var/run/dpdk/spdk_pid84507 00:31:06.726 Removing: /var/run/dpdk/spdk_pid84634 00:31:06.726 Removing: /var/run/dpdk/spdk_pid84692 00:31:06.726 Removing: /var/run/dpdk/spdk_pid84756 00:31:06.726 Removing: /var/run/dpdk/spdk_pid84812 00:31:06.726 Removing: /var/run/dpdk/spdk_pid84876 00:31:06.726 Removing: /var/run/dpdk/spdk_pid85066 00:31:06.726 Removing: /var/run/dpdk/spdk_pid85153 00:31:06.726 Removing: /var/run/dpdk/spdk_pid85219 00:31:06.726 Removing: /var/run/dpdk/spdk_pid85293 00:31:06.726 Removing: /var/run/dpdk/spdk_pid85333 00:31:06.726 Removing: /var/run/dpdk/spdk_pid85406 00:31:06.726 Removing: /var/run/dpdk/spdk_pid85525 00:31:06.726 Clean 00:31:06.727 14:11:59 -- common/autotest_common.sh@1453 -- # return 0 00:31:06.727 14:11:59 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:31:06.727 14:11:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:06.727 14:11:59 -- common/autotest_common.sh@10 -- # set +x 00:31:06.986 14:11:59 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:31:06.986 14:11:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:06.986 14:11:59 -- common/autotest_common.sh@10 -- # set +x 00:31:06.986 14:11:59 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:06.986 14:11:59 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:06.986 14:11:59 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:06.986 14:11:59 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:31:06.986 14:11:59 -- spdk/autotest.sh@398 -- # hostname 00:31:06.986 14:11:59 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:07.246 geninfo: WARNING: invalid characters removed from testname! 00:31:33.821 14:12:25 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:36.380 14:12:28 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:38.289 14:12:31 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:40.198 14:12:33 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:42.735 14:12:35 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:44.644 14:12:37 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:46.553 14:12:39 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:46.553 14:12:39 -- spdk/autorun.sh@1 -- $ timing_finish 00:31:46.553 14:12:39 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:31:46.553 14:12:39 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:46.553 14:12:39 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:31:46.553 14:12:39 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:46.553 + [[ -n 5254 ]] 00:31:46.553 + sudo kill 5254 00:31:46.563 [Pipeline] } 00:31:46.579 [Pipeline] // timeout 00:31:46.585 [Pipeline] } 00:31:46.599 [Pipeline] // stage 00:31:46.604 [Pipeline] } 00:31:46.617 [Pipeline] // catchError 00:31:46.626 [Pipeline] stage 00:31:46.628 [Pipeline] { (Stop VM) 00:31:46.640 [Pipeline] sh 00:31:46.924 + vagrant halt 00:31:49.464 ==> default: Halting domain... 00:31:56.048 [Pipeline] sh 00:31:56.329 + vagrant destroy -f 00:31:58.865 ==> default: Removing domain... 00:31:59.446 [Pipeline] sh 00:31:59.729 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:31:59.737 [Pipeline] } 00:31:59.750 [Pipeline] // stage 00:31:59.754 [Pipeline] } 00:31:59.767 [Pipeline] // dir 00:31:59.772 [Pipeline] } 00:31:59.786 [Pipeline] // wrap 00:31:59.792 [Pipeline] } 00:31:59.804 [Pipeline] // catchError 00:31:59.813 [Pipeline] stage 00:31:59.815 [Pipeline] { (Epilogue) 00:31:59.828 [Pipeline] sh 00:32:00.111 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:05.402 [Pipeline] catchError 00:32:05.404 [Pipeline] { 00:32:05.416 [Pipeline] sh 00:32:05.748 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:06.008 Artifacts sizes are good 00:32:06.017 [Pipeline] } 00:32:06.031 [Pipeline] // catchError 00:32:06.041 [Pipeline] archiveArtifacts 00:32:06.048 Archiving artifacts 00:32:06.160 [Pipeline] cleanWs 00:32:06.172 [WS-CLEANUP] Deleting project workspace... 00:32:06.172 [WS-CLEANUP] Deferred wipeout is used... 00:32:06.179 [WS-CLEANUP] done 00:32:06.181 [Pipeline] } 00:32:06.196 [Pipeline] // stage 00:32:06.202 [Pipeline] } 00:32:06.216 [Pipeline] // node 00:32:06.221 [Pipeline] End of Pipeline 00:32:06.267 Finished: SUCCESS